Apr 23 17:50:00.140451 ip-10-0-142-106 systemd[1]: kubelet.service: Failed to load environment files: No such file or directory
Apr 23 17:50:00.140464 ip-10-0-142-106 systemd[1]: kubelet.service: Failed to run 'start-pre' task: No such file or directory
Apr 23 17:50:00.140471 ip-10-0-142-106 systemd[1]: kubelet.service: Failed with result 'resources'.
Apr 23 17:50:00.140702 ip-10-0-142-106 systemd[1]: Failed to start Kubernetes Kubelet.
Apr 23 17:50:10.371867 ip-10-0-142-106 systemd[1]: kubelet.service: Failed to schedule restart job: Unit crio.service not found.
Apr 23 17:50:10.371883 ip-10-0-142-106 systemd[1]: kubelet.service: Failed with result 'resources'.
-- Boot 165eb9cfaff94cb68d120f9a5b46a803 --
Apr 23 17:52:41.991814 ip-10-0-142-106 systemd[1]: Starting Kubernetes Kubelet...
Apr 23 17:52:42.408778 ip-10-0-142-106 kubenswrapper[2574]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:52:42.408778 ip-10-0-142-106 kubenswrapper[2574]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 23 17:52:42.408778 ip-10-0-142-106 kubenswrapper[2574]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:52:42.408778 ip-10-0-142-106 kubenswrapper[2574]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 23 17:52:42.408778 ip-10-0-142-106 kubenswrapper[2574]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:52:42.412026 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.411951    2574 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 23 17:52:42.415692 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415677    2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:42.415692 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415693    2574 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415698    2574 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415702    2574 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415705    2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415708    2574 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415711    2574 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415714    2574 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415717    2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415719    2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415722    2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415725    2574 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415728    2574 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415731    2574 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415734    2574 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415737    2574 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415739    2574 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415742    2574 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415745    2574 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415747    2574 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415750    2574 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:42.415761 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415752    2574 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415755    2574 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415758    2574 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415760    2574 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415763    2574 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415766    2574 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415769    2574 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415772    2574 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415775    2574 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415777    2574 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415780    2574 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415783    2574 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415785    2574 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415788    2574 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415791    2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415793    2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415795    2574 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415798    2574 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415801    2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:42.416232 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415804    2574 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415806    2574 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415808    2574 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415811    2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415813    2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415816    2574 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415820    2574 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415826    2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415829    2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415831    2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415833    2574 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415836    2574 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415839    2574 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415841    2574 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415844    2574 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415848    2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415851    2574 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415854    2574 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415856    2574 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:42.416711 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415859    2574 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415861    2574 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415864    2574 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415866    2574 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415869    2574 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415871    2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415874    2574 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415876    2574 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415879    2574 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415881    2574 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415884    2574 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415886    2574 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415889    2574 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415894    2574 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415897    2574 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415900    2574 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415903    2574 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415905    2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415908    2574 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:42.417168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415910    2574 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415913    2574 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415915    2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415918    2574 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415921    2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415923    2574 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415926    2574 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.415928    2574 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417461    2574 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417468    2574 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417472    2574 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417476    2574 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417481    2574 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417484    2574 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417486    2574 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417489    2574 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417492    2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417495    2574 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417498    2574 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417501    2574 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:42.417635 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417503    2574 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417506    2574 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417508    2574 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417511    2574 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417513    2574 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417516    2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417519    2574 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417521    2574 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417524    2574 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417526    2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417529    2574 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417531    2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417533    2574 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417536    2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417539    2574 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417542    2574 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417544    2574 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417547    2574 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417550    2574 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417552    2574 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:42.418116 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417555    2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417557    2574 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417560    2574 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417562    2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417565    2574 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417567    2574 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417583    2574 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417586    2574 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417589    2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417592    2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417594    2574 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417598    2574 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417602    2574 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417605    2574 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417608    2574 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417611    2574 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417614    2574 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417616    2574 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417619    2574 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:42.418658 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417622    2574 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417624    2574 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417627    2574 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417629    2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417632    2574 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417636    2574 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417639    2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417642    2574 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417645    2574 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417648    2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417650    2574 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417653    2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417655    2574 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417658    2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417660    2574 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417663    2574 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417665    2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417668    2574 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417670    2574 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417673    2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:42.419120 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417676    2574 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417680    2574 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417682    2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417685    2574 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417687    2574 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417690    2574 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417692    2574 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417694    2574 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417697    2574 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417700    2574 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417702    2574 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417705    2574 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417708    2574 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417710    2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.417712    2574 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417790    2574 flags.go:64] FLAG: --address="0.0.0.0"
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417800    2574 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417810    2574 flags.go:64] FLAG: --anonymous-auth="true"
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417816    2574 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417821    2574 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417824    2574 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 23 17:52:42.419701 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417829    2574 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417833    2574 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417836    2574 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417840    2574 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417843    2574 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417846    2574 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417849    2574 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417852    2574 flags.go:64] FLAG: --cgroup-root=""
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417855    2574 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417858    2574 flags.go:64] FLAG: --client-ca-file=""
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417861    2574 flags.go:64] FLAG: --cloud-config=""
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417864    2574 flags.go:64] FLAG: --cloud-provider="external"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417867    2574 flags.go:64] FLAG: --cluster-dns="[]"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417870    2574 flags.go:64] FLAG: --cluster-domain=""
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417873    2574 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417876    2574 flags.go:64] FLAG: --config-dir=""
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417879    2574 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417882    2574 flags.go:64] FLAG: --container-log-max-files="5"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417886    2574 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417889    2574 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417892    2574 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417895    2574 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417899    2574 flags.go:64] FLAG: --contention-profiling="false"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417902    2574 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 23 17:52:42.420213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417905    2574 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417909    2574 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417911    2574 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417916    2574 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417919    2574 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417922    2574 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417925    2574 flags.go:64] FLAG: --enable-load-reader="false"
Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417928    2574 flags.go:64] FLAG: --enable-server="true"
23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417930 2574 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417935 2574 flags.go:64] FLAG: --event-burst="100" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417938 2574 flags.go:64] FLAG: --event-qps="50" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417942 2574 flags.go:64] FLAG: --event-storage-age-limit="default=0" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417945 2574 flags.go:64] FLAG: --event-storage-event-limit="default=0" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417948 2574 flags.go:64] FLAG: --eviction-hard="" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417951 2574 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417955 2574 flags.go:64] FLAG: --eviction-minimum-reclaim="" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417958 2574 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417961 2574 flags.go:64] FLAG: --eviction-soft="" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417963 2574 flags.go:64] FLAG: --eviction-soft-grace-period="" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417966 2574 flags.go:64] FLAG: --exit-on-lock-contention="false" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417969 2574 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417972 2574 flags.go:64] FLAG: --experimental-mounter-path="" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 
17:52:42.417975 2574 flags.go:64] FLAG: --fail-cgroupv1="false" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417978 2574 flags.go:64] FLAG: --fail-swap-on="true" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417981 2574 flags.go:64] FLAG: --feature-gates="" Apr 23 17:52:42.420803 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417989 2574 flags.go:64] FLAG: --file-check-frequency="20s" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417992 2574 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417995 2574 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.417998 2574 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418001 2574 flags.go:64] FLAG: --healthz-port="10248" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418005 2574 flags.go:64] FLAG: --help="false" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418008 2574 flags.go:64] FLAG: --hostname-override="ip-10-0-142-106.ec2.internal" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418012 2574 flags.go:64] FLAG: --housekeeping-interval="10s" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418015 2574 flags.go:64] FLAG: --http-check-frequency="20s" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418018 2574 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418021 2574 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml" Apr 23 17:52:42.421399 
ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418025 2574 flags.go:64] FLAG: --image-gc-high-threshold="85" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418028 2574 flags.go:64] FLAG: --image-gc-low-threshold="80" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418031 2574 flags.go:64] FLAG: --image-service-endpoint="" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418033 2574 flags.go:64] FLAG: --kernel-memcg-notification="false" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418036 2574 flags.go:64] FLAG: --kube-api-burst="100" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418039 2574 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418042 2574 flags.go:64] FLAG: --kube-api-qps="50" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418045 2574 flags.go:64] FLAG: --kube-reserved="" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418048 2574 flags.go:64] FLAG: --kube-reserved-cgroup="" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418051 2574 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418054 2574 flags.go:64] FLAG: --kubelet-cgroups="" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418057 2574 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418060 2574 flags.go:64] FLAG: --lock-file="" Apr 23 17:52:42.421399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418062 2574 flags.go:64] FLAG: --log-cadvisor-usage="false" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418065 2574 flags.go:64] FLAG: 
--log-flush-frequency="5s" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418068 2574 flags.go:64] FLAG: --log-json-info-buffer-size="0" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418074 2574 flags.go:64] FLAG: --log-json-split-stream="false" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418077 2574 flags.go:64] FLAG: --log-text-info-buffer-size="0" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418079 2574 flags.go:64] FLAG: --log-text-split-stream="false" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418083 2574 flags.go:64] FLAG: --logging-format="text" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418085 2574 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418089 2574 flags.go:64] FLAG: --make-iptables-util-chains="true" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418092 2574 flags.go:64] FLAG: --manifest-url="" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418095 2574 flags.go:64] FLAG: --manifest-url-header="" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418099 2574 flags.go:64] FLAG: --max-housekeeping-interval="15s" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418102 2574 flags.go:64] FLAG: --max-open-files="1000000" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418107 2574 flags.go:64] FLAG: --max-pods="110" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418110 2574 flags.go:64] FLAG: --maximum-dead-containers="-1" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418113 2574 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Apr 23 17:52:42.421995 ip-10-0-142-106 
kubenswrapper[2574]: I0423 17:52:42.418116 2574 flags.go:64] FLAG: --memory-manager-policy="None" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418120 2574 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418123 2574 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418126 2574 flags.go:64] FLAG: --node-ip="0.0.0.0" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418129 2574 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418138 2574 flags.go:64] FLAG: --node-status-max-images="50" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418141 2574 flags.go:64] FLAG: --node-status-update-frequency="10s" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418144 2574 flags.go:64] FLAG: --oom-score-adj="-999" Apr 23 17:52:42.421995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418147 2574 flags.go:64] FLAG: --pod-cidr="" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418150 2574 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418155 2574 flags.go:64] FLAG: --pod-manifest-path="" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418158 2574 flags.go:64] FLAG: --pod-max-pids="-1" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418161 2574 flags.go:64] FLAG: --pods-per-core="0" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418164 2574 flags.go:64] FLAG: --port="10250" Apr 23 
17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418167 2574 flags.go:64] FLAG: --protect-kernel-defaults="false" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418170 2574 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-0a37f0fd0fffff8aa" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418173 2574 flags.go:64] FLAG: --qos-reserved="" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418176 2574 flags.go:64] FLAG: --read-only-port="10255" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418179 2574 flags.go:64] FLAG: --register-node="true" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418182 2574 flags.go:64] FLAG: --register-schedulable="true" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418185 2574 flags.go:64] FLAG: --register-with-taints="" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418193 2574 flags.go:64] FLAG: --registry-burst="10" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418195 2574 flags.go:64] FLAG: --registry-qps="5" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418198 2574 flags.go:64] FLAG: --reserved-cpus="" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418201 2574 flags.go:64] FLAG: --reserved-memory="" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418204 2574 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418207 2574 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418210 2574 flags.go:64] FLAG: --rotate-certificates="false" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418213 2574 flags.go:64] FLAG: 
--rotate-server-certificates="false" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418216 2574 flags.go:64] FLAG: --runonce="false" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418219 2574 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418222 2574 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418225 2574 flags.go:64] FLAG: --seccomp-default="false" Apr 23 17:52:42.422567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418228 2574 flags.go:64] FLAG: --serialize-image-pulls="true" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418231 2574 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418237 2574 flags.go:64] FLAG: --storage-driver-db="cadvisor" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418240 2574 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418243 2574 flags.go:64] FLAG: --storage-driver-password="root" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418246 2574 flags.go:64] FLAG: --storage-driver-secure="false" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418249 2574 flags.go:64] FLAG: --storage-driver-table="stats" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418252 2574 flags.go:64] FLAG: --storage-driver-user="root" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418257 2574 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418260 2574 flags.go:64] FLAG: --sync-frequency="1m0s" Apr 23 
17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418263 2574 flags.go:64] FLAG: --system-cgroups="" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418266 2574 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418271 2574 flags.go:64] FLAG: --system-reserved-cgroup="" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418274 2574 flags.go:64] FLAG: --tls-cert-file="" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418277 2574 flags.go:64] FLAG: --tls-cipher-suites="[]" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418281 2574 flags.go:64] FLAG: --tls-min-version="" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418283 2574 flags.go:64] FLAG: --tls-private-key-file="" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418286 2574 flags.go:64] FLAG: --topology-manager-policy="none" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418289 2574 flags.go:64] FLAG: --topology-manager-policy-options="" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418292 2574 flags.go:64] FLAG: --topology-manager-scope="container" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418295 2574 flags.go:64] FLAG: --v="2" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418299 2574 flags.go:64] FLAG: --version="false" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418303 2574 flags.go:64] FLAG: --vmodule="" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418307 2574 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.418311 2574 flags.go:64] FLAG: 
--volume-stats-agg-period="1m0s" Apr 23 17:52:42.423177 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418407 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418412 2574 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418415 2574 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418419 2574 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418423 2574 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418425 2574 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418428 2574 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418431 2574 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418433 2574 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418436 2574 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418439 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418443 2574 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418446 2574 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418450 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418452 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418457 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418460 2574 feature_gate.go:328] unrecognized feature gate: Example2 Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418462 2574 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418465 2574 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418468 2574 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 23 17:52:42.423838 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418471 2574 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418474 2574 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418477 2574 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418480 2574 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418482 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPIController 
Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418486 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418488 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418491 2574 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418493 2574 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418496 2574 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418498 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418501 2574 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418504 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418506 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418509 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418511 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418514 2574 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 23 17:52:42.424356 ip-10-0-142-106 
kubenswrapper[2574]: W0423 17:52:42.418517 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418519 2574 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 23 17:52:42.424356 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418522 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418524 2574 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418527 2574 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418530 2574 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418532 2574 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418535 2574 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418538 2574 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418540 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418544 2574 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418547 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 23 17:52:42.425032 ip-10-0-142-106 
kubenswrapper[2574]: W0423 17:52:42.418550 2574 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418552 2574 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418555 2574 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418557 2574 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418559 2574 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418562 2574 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418565 2574 feature_gate.go:328] unrecognized feature gate: Example Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418567 2574 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418583 2574 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418586 2574 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 23 17:52:42.425032 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418589 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418591 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418594 2574 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 23 17:52:42.425833 
ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418596 2574 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418599 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418601 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418604 2574 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418606 2574 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418609 2574 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418615 2574 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418618 2574 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418621 2574 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418623 2574 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418626 2574 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418628 2574 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418631 2574 feature_gate.go:328] unrecognized feature gate: 
VSphereMultiDisk
Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418634 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418636 2574 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418639 2574 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418641 2574 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:42.425833 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418646 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:42.426321 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418648 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:42.426321 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418651 2574 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:42.426321 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418654 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:42.426321 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418656 2574 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:42.426321 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418659 2574 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:42.426321 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.418662 2574 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:42.426321 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.419342 2574 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 23 17:52:42.426321 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.426317 2574 server.go:530] "Kubelet version" kubeletVersion="v1.33.9"
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.426331 2574 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426378 2574 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426382 2574 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426385 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426389 2574 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426392 2574 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426395 2574 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426398 2574 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426400 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426402 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426405 2574 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426408 2574 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426410 2574 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426413 2574 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426416 2574 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426418 2574 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426421 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426423 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:42.426528 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426426 2574 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426428 2574 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426431 2574 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426433 2574 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426436 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426439 2574 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426441 2574 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426444 2574 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426446 2574 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426449 2574 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426452 2574 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426455 2574 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426457 2574 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426459 2574 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426462 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426466 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426468 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426471 2574 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426474 2574 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426476 2574 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:42.427003 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426479 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426481 2574 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426484 2574 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426486 2574 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426489 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426492 2574 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426495 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426497 2574 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426500 2574 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426502 2574 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426505 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426508 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426510 2574 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426513 2574 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426516 2574 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426518 2574 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426521 2574 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426523 2574 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426526 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426528 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:42.427500 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426531 2574 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426534 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426538 2574 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426543 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426546 2574 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426549 2574 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426552 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426555 2574 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426558 2574 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426561 2574 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426564 2574 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426568 2574 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426586 2574 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426589 2574 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426592 2574 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426595 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426597 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426600 2574 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426602 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:42.428027 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426606 2574 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:42.428493 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426608 2574 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:42.428493 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426611 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:42.428493 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426614 2574 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:52:42.428493 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426616 2574 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:42.428493 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426619 2574 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:42.428493 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426621 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:42.428493 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426624 2574 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:42.428493 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426626 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:42.428493 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426629 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:42.428493 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.426634 2574 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 23 17:52:42.428493 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426727 2574 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:42.428493 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426732 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:42.428493 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426735 2574 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:42.428493 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426737 2574 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:42.428493 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426740 2574 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:42.428493 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426742 2574 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426745 2574 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426747 2574 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426750 2574 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426753 2574 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426755 2574 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426758 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426761 2574 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426764 2574 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426766 2574 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426769 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426771 2574 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426774 2574 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426776 2574 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426779 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426781 2574 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426784 2574 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426787 2574 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426789 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426792 2574 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:42.428897 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426794 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426797 2574 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426799 2574 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426802 2574 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426805 2574 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426807 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426810 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426813 2574 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426815 2574 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426818 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426821 2574 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426823 2574 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426826 2574 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426829 2574 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426832 2574 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426835 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426838 2574 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426841 2574 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426843 2574 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426846 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:42.429382 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426848 2574 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426850 2574 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426853 2574 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426855 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426858 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426860 2574 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426863 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426865 2574 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426868 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426870 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426874 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426876 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426879 2574 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426881 2574 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426883 2574 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426886 2574 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426888 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426891 2574 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426893 2574 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:42.429893 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426896 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426898 2574 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426901 2574 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426903 2574 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426906 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426908 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426911 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426913 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426916 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426919 2574 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426922 2574 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426924 2574 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426927 2574 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426929 2574 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426932 2574 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426935 2574 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426937 2574 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426940 2574 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426942 2574 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426945 2574 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:42.430353 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426949 2574 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:42.430842 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:42.426952 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:42.430842 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.426957 2574 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 23 17:52:42.430842 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.427624 2574 server.go:962] "Client rotation is on, will bootstrap in background"
Apr 23 17:52:42.430842 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.430254 2574 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Apr 23 17:52:42.431364 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.431351 2574 server.go:1019] "Starting client certificate rotation"
Apr 23 17:52:42.431466 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.431452 2574 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 23 17:52:42.431498 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.431485 2574 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 23 17:52:42.455120 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.455102 2574 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 23 17:52:42.457404 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.457390 2574 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 23 17:52:42.465209 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.465195 2574 log.go:25] "Validated CRI v1 runtime API"
Apr 23 17:52:42.470634 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.470617 2574 log.go:25] "Validated CRI v1 image API"
Apr 23 17:52:42.471766 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.471753 2574 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 23 17:52:42.475533 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.475507 2574 fs.go:135] Filesystem UUIDs: map[2ea40357-88b6-454d-bbd9-ba0d588b2631:/dev/nvme0n1p4 7B77-95E7:/dev/nvme0n1p2 ea61f368-1610-4e97-b458-6f30f71b3412:/dev/nvme0n1p3]
Apr 23 17:52:42.475625 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.475531 2574 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Apr 23 17:52:42.480923 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.480812 2574 manager.go:217] Machine: {Timestamp:2026-04-23 17:52:42.479689866 +0000 UTC m=+0.377143630 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3092892 MemoryCapacity:33164488704 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec26b3766aeb2daf23804e7b85c25184 SystemUUID:ec26b376-6aeb-2daf-2380-4e7b85c25184 BootID:165eb9cf-aff9-4cb6-8d12-0f9a5b46a803 Filesystems:[{Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6103040 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582246400 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582242304 Type:vfs Inodes:4048399 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632898560 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:a9:ff:9a:96:61 Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:a9:ff:9a:96:61 Speed:0 Mtu:9001} {Name:ovs-system MacAddress:fe:0b:38:6b:3b:6f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164488704 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Apr 23 17:52:42.480923 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.480910 2574 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Apr 23 17:52:42.481060 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.481007 2574 manager.go:233] Version: {KernelVersion:5.14.0-570.107.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260414-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Apr 23 17:52:42.481850 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.481833 2574 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 23 17:52:42.483102 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.483080 2574 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 23 17:52:42.483234 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.483104 2574 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-142-106.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 23 17:52:42.483276 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.483242 2574 topology_manager.go:138] "Creating topology manager with none policy"
Apr 23 17:52:42.483276 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.483250 2574 container_manager_linux.go:306] "Creating device plugin manager"
Apr 23 17:52:42.483276 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.483264
2574 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 23 17:52:42.483367 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.483279 2574 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 23 17:52:42.483960 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.483950 2574 state_mem.go:36] "Initialized new in-memory state store" Apr 23 17:52:42.484190 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.484181 2574 server.go:1267] "Using root directory" path="/var/lib/kubelet" Apr 23 17:52:42.487087 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.487076 2574 kubelet.go:491] "Attempting to sync node with API server" Apr 23 17:52:42.487128 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.487091 2574 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 23 17:52:42.487682 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.487672 2574 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Apr 23 17:52:42.487716 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.487685 2574 kubelet.go:397] "Adding apiserver pod source" Apr 23 17:52:42.487716 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.487694 2574 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 23 17:52:42.489087 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.489074 2574 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 23 17:52:42.489137 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.489095 2574 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 23 17:52:42.491743 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.491729 2574 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 23 17:52:42.493281 ip-10-0-142-106 
kubenswrapper[2574]: I0423 17:52:42.493263 2574 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 23 17:52:42.494490 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.494476 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 23 17:52:42.494557 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.494498 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 23 17:52:42.494557 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.494509 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 23 17:52:42.494557 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.494517 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 23 17:52:42.494557 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.494525 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 23 17:52:42.494557 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.494533 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Apr 23 17:52:42.494557 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.494543 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Apr 23 17:52:42.494557 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.494551 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 23 17:52:42.494814 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.494561 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 23 17:52:42.494814 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.494585 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 23 17:52:42.494814 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.494598 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 23 
17:52:42.494814 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.494611 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 23 17:52:42.495459 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.495448 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 23 17:52:42.495512 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.495468 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Apr 23 17:52:42.498783 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.498768 2574 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 23 17:52:42.498856 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.498809 2574 server.go:1295] "Started kubelet" Apr 23 17:52:42.498907 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.498884 2574 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 23 17:52:42.498987 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.498943 2574 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 23 17:52:42.499041 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.498998 2574 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 23 17:52:42.499535 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.499512 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:42.499688 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.499653 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:42.499855 
ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.499730 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:42.500068 ip-10-0-142-106 systemd[1]: Started Kubernetes Kubelet. Apr 23 17:52:42.500448 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.500432 2574 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 23 17:52:42.500784 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.500640 2574 server.go:317] "Adding debug handlers to kubelet server" Apr 23 17:52:42.504451 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.504435 2574 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Apr 23 17:52:42.505058 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.505046 2574 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 23 17:52:42.506604 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.506459 2574 volume_manager.go:295] "The desired_state_of_world populator starts" Apr 23 17:52:42.506740 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.506729 2574 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 23 17:52:42.506852 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.506817 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:52:42.506988 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.506975 2574 reconstruct.go:97] "Volume reconstruction finished" Apr 23 17:52:42.507043 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.506992 2574 reconciler.go:26] "Reconciler: start to sync state" Apr 23 17:52:42.507713 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.507696 2574 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 23 17:52:42.510200 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.509007 2574 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-4h84b" Apr 23 17:52:42.510200 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.509622 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Apr 23 17:52:42.510200 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.509956 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:42.512115 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.512087 2574 factory.go:55] Registering systemd factory Apr 23 17:52:42.512115 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.512107 2574 factory.go:223] Registration of the systemd container factory successfully Apr 23 17:52:42.512372 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.512358 2574 factory.go:153] Registering CRI-O factory Apr 23 17:52:42.512431 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.512386 2574 factory.go:223] Registration of the crio container factory successfully Apr 23 17:52:42.512496 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.512482 2574 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file 
or directory Apr 23 17:52:42.512552 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.512516 2574 factory.go:103] Registering Raw factory Apr 23 17:52:42.512552 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.512537 2574 manager.go:1196] Started watching for new ooms in manager Apr 23 17:52:42.512967 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.512950 2574 manager.go:319] Starting recovery of all containers Apr 23 17:52:42.513478 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.509693 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb106a6dc4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.498780612 +0000 UTC m=+0.396234377,LastTimestamp:2026-04-23 17:52:42.498780612 +0000 UTC m=+0.396234377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:42.514519 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.514486 2574 kubelet.go:1618] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Apr 23 17:52:42.523367 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.523348 2574 manager.go:324] Recovery completed Apr 23 17:52:42.526986 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.526973 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:42.529150 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.529135 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:42.529225 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.529160 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:42.529225 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.529176 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:42.529635 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.529623 2574 cpu_manager.go:222] "Starting CPU manager" policy="none" Apr 23 17:52:42.529635 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.529632 2574 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Apr 23 17:52:42.529718 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.529646 2574 state_mem.go:36] "Initialized new in-memory state store" Apr 23 17:52:42.530845 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.530785 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb1239d055 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529149013 +0000 UTC m=+0.426602774,LastTimestamp:2026-04-23 17:52:42.529149013 +0000 UTC m=+0.426602774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:42.531616 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.531603 2574 policy_none.go:49] "None policy: Start" Apr 23 17:52:42.531670 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.531619 2574 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 23 17:52:42.531670 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.531630 2574 state_mem.go:35] "Initializing new in-memory state store" Apr 23 17:52:42.540945 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.540869 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb123a0dc5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529164741 +0000 UTC m=+0.426618501,LastTimestamp:2026-04-23 17:52:42.529164741 +0000 UTC 
m=+0.426618501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:42.547790 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.547705 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb123a4ca3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529180835 +0000 UTC m=+0.426634596,LastTimestamp:2026-04-23 17:52:42.529180835 +0000 UTC m=+0.426634596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:42.572118 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.572104 2574 manager.go:341] "Starting Device Plugin manager" Apr 23 17:52:42.581094 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.572160 2574 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 23 17:52:42.581094 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.572169 2574 server.go:85] "Starting device plugin registration server" Apr 23 17:52:42.581094 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.572344 2574 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 23 17:52:42.581094 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.572375 2574 
container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 23 17:52:42.581094 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.572422 2574 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Apr 23 17:52:42.581094 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.572619 2574 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Apr 23 17:52:42.581094 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.572631 2574 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 23 17:52:42.581094 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.572973 2574 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Apr 23 17:52:42.581094 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.573014 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:52:42.584447 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.584393 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb14f21577 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.574779767 +0000 UTC m=+0.472233519,LastTimestamp:2026-04-23 17:52:42.574779767 +0000 UTC m=+0.472233519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:42.639171 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.639140 2574 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 23 17:52:42.640367 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.640350 2574 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 23 17:52:42.640429 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.640381 2574 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 23 17:52:42.640429 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.640400 2574 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 23 17:52:42.640429 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.640409 2574 kubelet.go:2451] "Starting kubelet main sync loop" Apr 23 17:52:42.640540 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.640446 2574 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 23 17:52:42.647536 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.647515 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:42.672780 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.672739 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:42.673515 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.673501 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:42.673590 ip-10-0-142-106 
kubenswrapper[2574]: I0423 17:52:42.673526 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:42.673590 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.673546 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:42.673590 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.673587 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal" Apr 23 17:52:42.681643 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.681524 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb1239d055\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb1239d055 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529149013 +0000 UTC m=+0.426602774,LastTimestamp:2026-04-23 17:52:42.673513624 +0000 UTC m=+0.570967385,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:42.684942 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.684869 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb123a0dc5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb123a0dc5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529164741 +0000 UTC m=+0.426618501,LastTimestamp:2026-04-23 17:52:42.673532757 +0000 UTC m=+0.570986519,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:42.685028 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.684948 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-106.ec2.internal" Apr 23 17:52:42.690741 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.690679 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb123a4ca3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb123a4ca3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 
17:52:42.529180835 +0000 UTC m=+0.426634596,LastTimestamp:2026-04-23 17:52:42.673551377 +0000 UTC m=+0.571005137,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:42.712558 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.712535 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Apr 23 17:52:42.740781 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.740758 2574 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-142-106.ec2.internal"]
Apr 23 17:52:42.740864 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.740818 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:42.742144 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.742127 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:42.742201 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.742156 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:42.742201 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.742165 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:42.743134 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.743120 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:42.743284 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.743269 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal"
Apr 23 17:52:42.743337 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.743306 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:42.743789 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.743775 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:42.743789 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.743782 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:42.743896 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.743800 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:42.743896 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.743807 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:42.743896 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.743814 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:42.743896 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.743845 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:42.746293 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.746274 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-106.ec2.internal"
Apr 23 17:52:42.746384 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.746304 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:42.747002 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.746986 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:42.747109 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.747011 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:42.747109 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.747025 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:42.752394 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.752334 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb1239d055\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb1239d055 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529149013 +0000 UTC m=+0.426602774,LastTimestamp:2026-04-23 17:52:42.742144697 +0000 UTC m=+0.639598458,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:42.759123 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.759070 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb123a0dc5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb123a0dc5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529164741 +0000 UTC m=+0.426618501,LastTimestamp:2026-04-23 17:52:42.742160794 +0000 UTC m=+0.639614555,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:42.767711 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.767652 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb123a4ca3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb123a4ca3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529180835 +0000 UTC m=+0.426634596,LastTimestamp:2026-04-23 17:52:42.742169444 +0000 UTC m=+0.639623206,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:42.773465 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.773448 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:52:42.774730 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.774658 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb1239d055\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb1239d055 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529149013 +0000 UTC m=+0.426602774,LastTimestamp:2026-04-23 17:52:42.743788139 +0000 UTC m=+0.641241903,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:42.777724 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.777710 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:52:42.783862 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.783811 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb1239d055\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb1239d055 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529149013 +0000 UTC m=+0.426602774,LastTimestamp:2026-04-23 17:52:42.743798394 +0000 UTC m=+0.641252155,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:42.790398 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.790340 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb123a0dc5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb123a0dc5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529164741 +0000 UTC m=+0.426618501,LastTimestamp:2026-04-23 17:52:42.743808084 +0000 UTC m=+0.641261845,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:42.800387 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.800328 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb123a4ca3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb123a4ca3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529180835 +0000 UTC m=+0.426634596,LastTimestamp:2026-04-23 17:52:42.743818137 +0000 UTC m=+0.641271898,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:42.806606 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.806534 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb123a0dc5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb123a0dc5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529164741 +0000 UTC m=+0.426618501,LastTimestamp:2026-04-23 17:52:42.743832171 +0000 UTC m=+0.641285932,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:42.809229 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.809211 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/2e1ed6752f88ed3103e33f18a9adc980-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal\" (UID: \"2e1ed6752f88ed3103e33f18a9adc980\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal"
Apr 23 17:52:42.809303 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.809235 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e1ed6752f88ed3103e33f18a9adc980-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal\" (UID: \"2e1ed6752f88ed3103e33f18a9adc980\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal"
Apr 23 17:52:42.809303 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.809250 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/bad58ef41963d01887fbfb46c2febb18-config\") pod \"kube-apiserver-proxy-ip-10-0-142-106.ec2.internal\" (UID: \"bad58ef41963d01887fbfb46c2febb18\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-142-106.ec2.internal"
Apr 23 17:52:42.815201 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.815149 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb123a4ca3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb123a4ca3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529180835 +0000 UTC m=+0.426634596,LastTimestamp:2026-04-23 17:52:42.743849807 +0000 UTC m=+0.641303567,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:42.825164 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.825106 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb1239d055\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb1239d055 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529149013 +0000 UTC m=+0.426602774,LastTimestamp:2026-04-23 17:52:42.747000891 +0000 UTC m=+0.644454652,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:42.834283 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.834226 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb123a0dc5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb123a0dc5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529164741 +0000 UTC m=+0.426618501,LastTimestamp:2026-04-23 17:52:42.747017921 +0000 UTC m=+0.644471684,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:42.841351 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.841297 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb123a4ca3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb123a4ca3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529180835 +0000 UTC m=+0.426634596,LastTimestamp:2026-04-23 17:52:42.747030951 +0000 UTC m=+0.644484715,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:42.885450 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.885432 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:42.886092 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.886079 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:42.886144 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.886104 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:42.886144 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.886116 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:42.886144 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.886137 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:52:42.894868 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.894807 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb1239d055\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb1239d055 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529149013 +0000 UTC m=+0.426602774,LastTimestamp:2026-04-23 17:52:42.886092183 +0000 UTC m=+0.783545949,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:42.901070 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.901054 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:52:42.901271 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.901207 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb123a0dc5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb123a0dc5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529164741 +0000 UTC m=+0.426618501,LastTimestamp:2026-04-23 17:52:42.886111297 +0000 UTC m=+0.783565058,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:42.910204 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.910184 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/bad58ef41963d01887fbfb46c2febb18-config\") pod \"kube-apiserver-proxy-ip-10-0-142-106.ec2.internal\" (UID: \"bad58ef41963d01887fbfb46c2febb18\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-142-106.ec2.internal"
Apr 23 17:52:42.910282 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.910206 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/2e1ed6752f88ed3103e33f18a9adc980-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal\" (UID: \"2e1ed6752f88ed3103e33f18a9adc980\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal"
Apr 23 17:52:42.910282 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.910222 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e1ed6752f88ed3103e33f18a9adc980-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal\" (UID: \"2e1ed6752f88ed3103e33f18a9adc980\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal"
Apr 23 17:52:42.910282 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.910274 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e1ed6752f88ed3103e33f18a9adc980-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal\" (UID: \"2e1ed6752f88ed3103e33f18a9adc980\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal"
Apr 23 17:52:42.910374 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.910290 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/2e1ed6752f88ed3103e33f18a9adc980-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal\" (UID: \"2e1ed6752f88ed3103e33f18a9adc980\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal"
Apr 23 17:52:42.910374 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:42.910290 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/bad58ef41963d01887fbfb46c2febb18-config\") pod \"kube-apiserver-proxy-ip-10-0-142-106.ec2.internal\" (UID: \"bad58ef41963d01887fbfb46c2febb18\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-142-106.ec2.internal"
Apr 23 17:52:42.913560 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:42.913507 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb123a4ca3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb123a4ca3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529180835 +0000 UTC m=+0.426634596,LastTimestamp:2026-04-23 17:52:42.886120577 +0000 UTC m=+0.783574338,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:43.075388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:43.075369 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal"
Apr 23 17:52:43.079724 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:43.079707 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-106.ec2.internal"
Apr 23 17:52:43.120444 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:43.120425 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms"
Apr 23 17:52:43.302167 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:43.302142 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:43.303295 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:43.303279 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:43.303400 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:43.303322 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:43.303400 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:43.303337 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:43.303400 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:43.303369 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:52:43.312371 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:43.312302 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb1239d055\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb1239d055 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529149013 +0000 UTC m=+0.426602774,LastTimestamp:2026-04-23 17:52:43.303295821 +0000 UTC m=+1.200749583,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:43.319145 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:43.319123 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:52:43.319253 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:43.319193 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-142-106.ec2.internal.18a90ddb123a0dc5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-106.ec2.internal.18a90ddb123a0dc5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-106.ec2.internal,UID:ip-10-0-142-106.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-106.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:42.529164741 +0000 UTC m=+0.426618501,LastTimestamp:2026-04-23 17:52:43.303329639 +0000 UTC m=+1.200783401,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:43.396148 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:43.396091 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 23 17:52:43.509044 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:43.509024 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:43.532818 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:43.532786 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e1ed6752f88ed3103e33f18a9adc980.slice/crio-1418abb5dbaed46c46c1772eee21c99fb490644c46959a347c649f1aea5a27c1 WatchSource:0}: Error finding container 1418abb5dbaed46c46c1772eee21c99fb490644c46959a347c649f1aea5a27c1: Status 404 returned error can't find the container with id 1418abb5dbaed46c46c1772eee21c99fb490644c46959a347c649f1aea5a27c1
Apr 23 17:52:43.537686 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:43.537670 2574 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 17:52:43.547100 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:43.546994 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddb4e59fcc3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\",Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:43.537890499 +0000 UTC m=+1.435344246,LastTimestamp:2026-04-23 17:52:43.537890499 +0000 UTC m=+1.435344246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:43.552939 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:52:43.552918 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbad58ef41963d01887fbfb46c2febb18.slice/crio-8125b4494d5398e7662fc4f81de4f25c47b9783f118aac1f64c8efa32fb2bb69 WatchSource:0}: Error finding container 8125b4494d5398e7662fc4f81de4f25c47b9783f118aac1f64c8efa32fb2bb69: Status 404 returned error can't find the container with id 8125b4494d5398e7662fc4f81de4f25c47b9783f118aac1f64c8efa32fb2bb69
Apr 23 17:52:43.561126 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:43.561064 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-142-106.ec2.internal.18a90ddb4f529d2e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-142-106.ec2.internal,UID:bad58ef41963d01887fbfb46c2febb18,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421\",Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:43.554184494 +0000 UTC m=+1.451638245,LastTimestamp:2026-04-23 17:52:43.554184494 +0000 UTC m=+1.451638245,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:52:43.621395 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:43.621365 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 23 17:52:43.643336 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:43.643289 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-106.ec2.internal" event={"ID":"bad58ef41963d01887fbfb46c2febb18","Type":"ContainerStarted","Data":"8125b4494d5398e7662fc4f81de4f25c47b9783f118aac1f64c8efa32fb2bb69"}
Apr 23 17:52:43.644276 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:43.644255 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" event={"ID":"2e1ed6752f88ed3103e33f18a9adc980","Type":"ContainerStarted","Data":"1418abb5dbaed46c46c1772eee21c99fb490644c46959a347c649f1aea5a27c1"}
Apr 23 17:52:43.700448 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:43.700396 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 23 17:52:43.930778 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:43.930742 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 23 17:52:43.930934 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:43.930854 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="1.6s"
Apr 23 17:52:44.119246 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:44.119204 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:44.120309 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:44.120284 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:44.120427 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:44.120325 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:44.120427 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:44.120340 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:44.120427 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:44.120383 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:52:44.137730 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:44.137701 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:52:44.509027 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:44.508995 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:45.094139 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:45.094030 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-142-106.ec2.internal.18a90ddbaaaa9fa6 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-142-106.ec2.internal,UID:bad58ef41963d01887fbfb46c2febb18,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Pulled,Message:Successfully pulled image
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421\" in 1.532s (1.532s including waiting). Image size: 488332864 bytes.,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:45.08667895 +0000 UTC m=+2.984132727,LastTimestamp:2026-04-23 17:52:45.08667895 +0000 UTC m=+2.984132727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:45.104607 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:45.104521 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddbaacb19dd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" in 1.55s (1.55s including waiting). 
Image size: 468435751 bytes.,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:45.088807389 +0000 UTC m=+2.986261147,LastTimestamp:2026-04-23 17:52:45.088807389 +0000 UTC m=+2.986261147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:45.156948 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:45.156777 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-142-106.ec2.internal.18a90ddbae5704ca kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-142-106.ec2.internal,UID:bad58ef41963d01887fbfb46c2febb18,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Created,Message:Created container: haproxy,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:45.148308682 +0000 UTC m=+3.045762446,LastTimestamp:2026-04-23 17:52:45.148308682 +0000 UTC m=+3.045762446,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:45.166461 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:45.166397 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-142-106.ec2.internal.18a90ddbaebbfac6 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-142-106.ec2.internal,UID:bad58ef41963d01887fbfb46c2febb18,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Started,Message:Started container haproxy,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:45.154925254 +0000 UTC m=+3.052379018,LastTimestamp:2026-04-23 17:52:45.154925254 +0000 UTC m=+3.052379018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:45.463152 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:45.463066 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:45.507204 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:45.507175 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:45.540492 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:45.540464 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="3.2s" Apr 23 17:52:45.568149 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:45.568075 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddbc6ed3c19 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:45.560806425 +0000 UTC m=+3.458260191,LastTimestamp:2026-04-23 17:52:45.560806425 +0000 UTC m=+3.458260191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:45.578383 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:45.578323 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddbc76908dc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:45.568919772 +0000 UTC m=+3.466373533,LastTimestamp:2026-04-23 17:52:45.568919772 +0000 UTC m=+3.466373533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:45.648317 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:45.648285 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-106.ec2.internal" event={"ID":"bad58ef41963d01887fbfb46c2febb18","Type":"ContainerStarted","Data":"0681fa97c75378881e3fe06b65aac4f28a2116108b85ea92995de475af35ebbe"} Apr 23 17:52:45.648408 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:45.648349 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:45.649470 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:45.649447 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" event={"ID":"2e1ed6752f88ed3103e33f18a9adc980","Type":"ContainerStarted","Data":"a8e68d70dd4789941efea0bb374b6a3b5fddfb147c35ca2e6a8f4583ccaf1fc5"} Apr 23 17:52:45.649561 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:45.649522 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:45.649665 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:45.649652 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:45.649712 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:45.649677 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:45.649712 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:45.649687 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:45.649826 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:45.649814 2574 kubelet.go:3336] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal" Apr 23 17:52:45.650372 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:45.650360 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:45.650434 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:45.650386 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:45.650434 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:45.650395 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:45.650531 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:45.650523 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal" Apr 23 17:52:45.738103 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:45.738082 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:45.739126 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:45.739110 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:45.739190 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:45.739143 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:45.739190 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:45.739157 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:45.739190 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:45.739179 2574 
kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal" Apr 23 17:52:45.755728 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:45.755707 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-106.ec2.internal" Apr 23 17:52:45.762938 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:45.762916 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:46.509520 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:46.509492 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:46.652932 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:46.652894 2574 generic.go:358] "Generic (PLEG): container finished" podID="2e1ed6752f88ed3103e33f18a9adc980" containerID="a8e68d70dd4789941efea0bb374b6a3b5fddfb147c35ca2e6a8f4583ccaf1fc5" exitCode=0 Apr 23 17:52:46.653036 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:46.652987 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:46.653036 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:46.652987 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" 
event={"ID":"2e1ed6752f88ed3103e33f18a9adc980","Type":"ContainerDied","Data":"a8e68d70dd4789941efea0bb374b6a3b5fddfb147c35ca2e6a8f4583ccaf1fc5"} Apr 23 17:52:46.653036 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:46.653004 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:46.654357 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:46.654340 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:46.654357 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:46.654349 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:46.654480 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:46.654370 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:46.654480 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:46.654376 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:46.654480 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:46.654383 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:46.654480 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:46.654387 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:46.654631 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:46.654616 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal" Apr 23 17:52:46.656428 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:46.656408 2574 
kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal" Apr 23 17:52:46.666070 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:46.665981 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc085ce513 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:46.658643219 +0000 UTC m=+4.556096989,LastTimestamp:2026-04-23 17:52:46.658643219 +0000 UTC m=+4.556096989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:46.770284 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:46.770214 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc0e75eff5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:46.760947701 +0000 UTC m=+4.658401472,LastTimestamp:2026-04-23 17:52:46.760947701 +0000 UTC m=+4.658401472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:46.776915 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:46.776851 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc0ef7fbb1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:46.769470385 +0000 UTC m=+4.666924145,LastTimestamp:2026-04-23 17:52:46.769470385 +0000 UTC m=+4.666924145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:46.807409 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:46.807383 2574 reflector.go:200] 
"Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:47.055102 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:47.055034 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:47.509730 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:47.509703 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:47.655264 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:47.655239 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/0.log" Apr 23 17:52:47.655655 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:47.655633 2574 generic.go:358] "Generic (PLEG): container finished" podID="2e1ed6752f88ed3103e33f18a9adc980" containerID="b9f32a2cb8eb8b93eb3978da5ffce87c5c96cafdbd04904adc6c0ad8c74e6a28" exitCode=1 Apr 23 17:52:47.655726 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:47.655667 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" 
event={"ID":"2e1ed6752f88ed3103e33f18a9adc980","Type":"ContainerDied","Data":"b9f32a2cb8eb8b93eb3978da5ffce87c5c96cafdbd04904adc6c0ad8c74e6a28"} Apr 23 17:52:47.655764 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:47.655724 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:47.656490 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:47.656474 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:47.656628 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:47.656504 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:47.656628 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:47.656514 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:47.656716 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:47.656698 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal" Apr 23 17:52:47.656751 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:47.656743 2574 scope.go:117] "RemoveContainer" containerID="b9f32a2cb8eb8b93eb3978da5ffce87c5c96cafdbd04904adc6c0ad8c74e6a28" Apr 23 17:52:47.666446 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:47.666360 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc085ce513\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc085ce513 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:46.658643219 +0000 UTC m=+4.556096989,LastTimestamp:2026-04-23 17:52:47.658662475 +0000 UTC m=+5.556116244,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:47.771653 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:47.771524 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc0e75eff5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc0e75eff5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:46.760947701 +0000 UTC m=+4.658401472,LastTimestamp:2026-04-23 17:52:47.762011491 +0000 UTC m=+5.659465252,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:47.779110 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:47.779029 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc0ef7fbb1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc0ef7fbb1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:46.769470385 +0000 UTC m=+4.666924145,LastTimestamp:2026-04-23 17:52:47.769951173 +0000 UTC m=+5.667404935,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:48.508912 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:48.508877 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:48.659413 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:48.659380 2574 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/1.log" Apr 23 17:52:48.659875 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:48.659856 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/0.log" Apr 23 17:52:48.660250 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:48.660225 2574 generic.go:358] "Generic (PLEG): container finished" podID="2e1ed6752f88ed3103e33f18a9adc980" containerID="8c8f9ed3f9b4b011c2e30d4e60db428acedaaa5286ef3d333c1f7e1157d4ad94" exitCode=1 Apr 23 17:52:48.660318 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:48.660261 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" event={"ID":"2e1ed6752f88ed3103e33f18a9adc980","Type":"ContainerDied","Data":"8c8f9ed3f9b4b011c2e30d4e60db428acedaaa5286ef3d333c1f7e1157d4ad94"} Apr 23 17:52:48.660318 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:48.660293 2574 scope.go:117] "RemoveContainer" containerID="b9f32a2cb8eb8b93eb3978da5ffce87c5c96cafdbd04904adc6c0ad8c74e6a28" Apr 23 17:52:48.660420 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:48.660342 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:48.661727 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:48.661346 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:48.661727 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:48.661377 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:48.661727 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:48.661388 
2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:48.661727 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:48.661655 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal" Apr 23 17:52:48.661727 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:48.661704 2574 scope.go:117] "RemoveContainer" containerID="8c8f9ed3f9b4b011c2e30d4e60db428acedaaa5286ef3d333c1f7e1157d4ad94" Apr 23 17:52:48.661891 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:48.661859 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_openshift-machine-config-operator(2e1ed6752f88ed3103e33f18a9adc980)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" podUID="2e1ed6752f88ed3103e33f18a9adc980" Apr 23 17:52:48.669997 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:48.669890 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc7fc2d57e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod 
kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_openshift-machine-config-operator(2e1ed6752f88ed3103e33f18a9adc980),Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:48.661812606 +0000 UTC m=+6.559266370,LastTimestamp:2026-04-23 17:52:48.661812606 +0000 UTC m=+6.559266370,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:48.750376 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:48.750350 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Apr 23 17:52:48.956421 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:48.956358 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:48.957364 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:48.957346 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:48.957411 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:48.957379 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:48.957411 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:48.957390 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:48.957484 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:48.957415 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal" Apr 23 17:52:48.974022 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:48.973999 2574 
kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-106.ec2.internal" Apr 23 17:52:49.508089 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:49.508053 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:49.663400 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:49.663376 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/1.log" Apr 23 17:52:49.663870 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:49.663853 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:49.664748 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:49.664733 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:49.664801 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:49.664764 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:49.664801 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:49.664778 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:49.665037 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:49.665021 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" 
node="ip-10-0-142-106.ec2.internal" Apr 23 17:52:49.665096 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:49.665085 2574 scope.go:117] "RemoveContainer" containerID="8c8f9ed3f9b4b011c2e30d4e60db428acedaaa5286ef3d333c1f7e1157d4ad94" Apr 23 17:52:49.665237 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:49.665220 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_openshift-machine-config-operator(2e1ed6752f88ed3103e33f18a9adc980)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" podUID="2e1ed6752f88ed3103e33f18a9adc980" Apr 23 17:52:49.674191 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:49.674112 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc7fc2d57e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc7fc2d57e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_openshift-machine-config-operator(2e1ed6752f88ed3103e33f18a9adc980),Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:48.661812606 +0000 UTC m=+6.559266370,LastTimestamp:2026-04-23 17:52:49.665186645 +0000 UTC 
m=+7.562640412,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:52:50.423979 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:50.423944 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:50.509049 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:50.509020 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:50.552316 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:50.552288 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:50.767064 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:50.767029 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:51.507383 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:51.507352 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot 
get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:52.319937 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:52.319908 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:52.507664 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:52.507633 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:52.573280 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:52.573214 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:52:53.510706 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:53.510676 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:54.507334 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:54.507297 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:55.162629 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:55.162598 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: 
User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:55.374460 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:55.374434 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:55.375627 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:55.375611 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:55.375718 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:55.375642 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:55.375718 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:55.375652 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:55.375718 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:55.375676 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal" Apr 23 17:52:55.394165 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:55.394135 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-106.ec2.internal" Apr 23 17:52:55.508027 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:55.508010 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:56.510371 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:56.510335 2574 csi_plugin.go:988] Failed to 
contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:57.508103 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:57.508075 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:57.705999 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:57.705964 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:58.507826 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:58.507794 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:59.332679 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:52:59.332649 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:59.507624 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:52:59.507594 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get 
resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:00.512485 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:00.512456 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:01.508543 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:01.508507 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:02.172693 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:02.172664 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:02.395185 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:02.395155 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:02.397287 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:02.397267 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:02.397382 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:02.397300 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:02.397382 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:02.397311 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:02.397382 
ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:02.397336 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal" Apr 23 17:53:02.414124 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:02.414096 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-106.ec2.internal" Apr 23 17:53:02.507454 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:02.507428 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:02.574014 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:02.573982 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:53:02.863863 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:02.863787 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:53:03.507970 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:03.507943 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:03.640945 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:03.640915 2574 kubelet_node_status.go:413] "Setting node 
annotation to enable volume controller attach/detach" Apr 23 17:53:03.641933 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:03.641916 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:03.642000 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:03.641946 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:03.642000 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:03.641956 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:03.642185 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:03.642173 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal" Apr 23 17:53:03.642229 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:03.642221 2574 scope.go:117] "RemoveContainer" containerID="8c8f9ed3f9b4b011c2e30d4e60db428acedaaa5286ef3d333c1f7e1157d4ad94" Apr 23 17:53:03.653136 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:03.653062 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc085ce513\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc085ce513 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:46.658643219 +0000 UTC m=+4.556096989,LastTimestamp:2026-04-23 17:53:03.644118803 +0000 UTC m=+21.541572570,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:53:03.747323 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:03.747231 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc0e75eff5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc0e75eff5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:46.760947701 +0000 UTC m=+4.658401472,LastTimestamp:2026-04-23 17:53:03.738478394 +0000 UTC m=+21.635932161,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:53:03.756890 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:03.756807 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc0ef7fbb1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc0ef7fbb1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:46.769470385 +0000 UTC m=+4.666924145,LastTimestamp:2026-04-23 17:53:03.746552271 +0000 UTC m=+21.644006051,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:53:03.852106 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:03.852070 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:53:04.509543 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:04.509515 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io 
"ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:04.685821 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:04.685795 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/2.log" Apr 23 17:53:04.686156 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:04.686137 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/1.log" Apr 23 17:53:04.686423 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:04.686403 2574 generic.go:358] "Generic (PLEG): container finished" podID="2e1ed6752f88ed3103e33f18a9adc980" containerID="a513f4e50194fee706c457249351bbce466cf57e4119ec8c76da0d387217ebdf" exitCode=1 Apr 23 17:53:04.686497 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:04.686437 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" event={"ID":"2e1ed6752f88ed3103e33f18a9adc980","Type":"ContainerDied","Data":"a513f4e50194fee706c457249351bbce466cf57e4119ec8c76da0d387217ebdf"} Apr 23 17:53:04.686497 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:04.686470 2574 scope.go:117] "RemoveContainer" containerID="8c8f9ed3f9b4b011c2e30d4e60db428acedaaa5286ef3d333c1f7e1157d4ad94" Apr 23 17:53:04.686611 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:04.686587 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:04.687557 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:04.687389 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:04.687557 
ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:04.687418 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:04.687557 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:04.687431 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:04.687709 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:04.687694 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal" Apr 23 17:53:04.687766 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:04.687756 2574 scope.go:117] "RemoveContainer" containerID="a513f4e50194fee706c457249351bbce466cf57e4119ec8c76da0d387217ebdf" Apr 23 17:53:04.687908 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:04.687892 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_openshift-machine-config-operator(2e1ed6752f88ed3103e33f18a9adc980)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" podUID="2e1ed6752f88ed3103e33f18a9adc980" Apr 23 17:53:04.695257 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:04.695100 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc7fc2d57e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc7fc2d57e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_openshift-machine-config-operator(2e1ed6752f88ed3103e33f18a9adc980),Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:48.661812606 +0000 UTC m=+6.559266370,LastTimestamp:2026-04-23 17:53:04.687858734 +0000 UTC m=+22.585312497,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:53:05.510253 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:05.510221 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:05.689421 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:05.689396 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/2.log" Apr 23 17:53:06.510852 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:06.510823 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:07.510972 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:07.510941 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:08.508898 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:08.508862 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:09.181470 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:09.181439 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:09.414190 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:09.414166 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:09.415199 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:09.415182 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:09.415306 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:09.415214 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:09.415306 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:09.415224 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:09.415306 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:09.415248 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal" Apr 23 17:53:09.431779 ip-10-0-142-106 kubenswrapper[2574]: E0423 
17:53:09.431725 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-106.ec2.internal" Apr 23 17:53:09.507519 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:09.507496 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:10.509230 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:10.509204 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:11.509280 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:11.509253 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:12.508922 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:12.508895 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:12.574139 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:12.574110 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:53:13.508425 ip-10-0-142-106 kubenswrapper[2574]: 
I0423 17:53:13.508399 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:14.053744 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:14.053707 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:53:14.511023 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:14.510992 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:15.509569 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:15.509535 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:16.191264 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:16.191223 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:16.432223 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:16.432182 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:16.433266 ip-10-0-142-106 
kubenswrapper[2574]: I0423 17:53:16.433246 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:16.433376 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:16.433288 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:16.433376 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:16.433303 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:16.433376 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:16.433341 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal" Apr 23 17:53:16.449290 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:16.449222 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-106.ec2.internal" Apr 23 17:53:16.508813 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:16.508785 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:17.509867 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:17.509840 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:17.641761 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:17.641722 2574 reflector.go:200] "Failed to watch" err="failed to 
list *v1.Node: nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:53:17.670438 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:17.670403 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:53:18.018944 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:18.018909 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:53:18.509022 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:18.508993 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:19.507236 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:19.507202 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:19.641267 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:19.641232 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:19.642164 ip-10-0-142-106 kubenswrapper[2574]: I0423 
17:53:19.642139 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:19.642273 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:19.642177 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:19.642273 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:19.642193 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:19.642460 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:19.642445 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal" Apr 23 17:53:19.642509 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:19.642500 2574 scope.go:117] "RemoveContainer" containerID="a513f4e50194fee706c457249351bbce466cf57e4119ec8c76da0d387217ebdf" Apr 23 17:53:19.642666 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:19.642649 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_openshift-machine-config-operator(2e1ed6752f88ed3103e33f18a9adc980)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" podUID="2e1ed6752f88ed3103e33f18a9adc980" Apr 23 17:53:19.652056 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:19.651977 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc7fc2d57e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc7fc2d57e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_openshift-machine-config-operator(2e1ed6752f88ed3103e33f18a9adc980),Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:48.661812606 +0000 UTC m=+6.559266370,LastTimestamp:2026-04-23 17:53:19.642616969 +0000 UTC m=+37.540070733,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:53:20.508953 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:20.508921 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:21.510417 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:21.510392 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:22.506950 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:22.506927 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get 
resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:22.574484 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:22.574462 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:53:23.201148 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:23.201113 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:23.450142 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:23.450111 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:23.451206 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:23.451153 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:23.451206 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:23.451190 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:23.451206 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:23.451204 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:23.451361 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:23.451231 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal" Apr 23 17:53:23.469879 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:23.469851 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in 
API group \"\" at the cluster scope" node="ip-10-0-142-106.ec2.internal" Apr 23 17:53:23.507255 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:23.507229 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:24.508805 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:24.508774 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:25.509021 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:25.508992 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:26.509721 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:26.509693 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:27.509049 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:27.509018 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:28.509565 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:28.509526 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:29.508565 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:29.508530 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:30.212654 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:30.212619 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:30.470912 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:30.470889 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:30.471914 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:30.471897 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:30.472005 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:30.471928 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:30.472005 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:30.471938 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:30.472005 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:30.471964 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal" Apr 23 17:53:30.489119 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:30.489092 2574 
kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-106.ec2.internal" Apr 23 17:53:30.507251 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:30.507231 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:31.509717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:31.509682 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:32.508059 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:32.508023 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:32.575491 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:32.575463 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:53:33.509702 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:33.509674 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:34.507289 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:34.507251 
2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:34.641662 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:34.641637 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:34.642767 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:34.642743 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:34.642879 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:34.642782 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:34.642879 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:34.642795 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:34.643053 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:34.643039 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal" Apr 23 17:53:34.643106 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:34.643095 2574 scope.go:117] "RemoveContainer" containerID="a513f4e50194fee706c457249351bbce466cf57e4119ec8c76da0d387217ebdf" Apr 23 17:53:34.653451 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:34.653336 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc085ce513\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc085ce513 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:46.658643219 +0000 UTC m=+4.556096989,LastTimestamp:2026-04-23 17:53:34.643998035 +0000 UTC m=+52.541451805,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:53:34.748153 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:34.748072 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc0e75eff5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc0e75eff5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:46.760947701 
+0000 UTC m=+4.658401472,LastTimestamp:2026-04-23 17:53:34.740524516 +0000 UTC m=+52.637978287,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:53:34.760055 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:34.759945 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc0ef7fbb1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc0ef7fbb1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:46.769470385 +0000 UTC m=+4.666924145,LastTimestamp:2026-04-23 17:53:34.749405553 +0000 UTC m=+52.646859316,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:53:35.508998 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:35.508959 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:35.730029 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:35.729998 2574 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 17:53:35.730433 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:35.730415 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/2.log" Apr 23 17:53:35.730804 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:35.730779 2574 generic.go:358] "Generic (PLEG): container finished" podID="2e1ed6752f88ed3103e33f18a9adc980" containerID="e8020d570f9039b4ea4b50995811271e8802b087b8aa971f3dc2f3eb5e22620f" exitCode=1 Apr 23 17:53:35.730879 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:35.730818 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" event={"ID":"2e1ed6752f88ed3103e33f18a9adc980","Type":"ContainerDied","Data":"e8020d570f9039b4ea4b50995811271e8802b087b8aa971f3dc2f3eb5e22620f"} Apr 23 17:53:35.730879 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:35.730858 2574 scope.go:117] "RemoveContainer" containerID="a513f4e50194fee706c457249351bbce466cf57e4119ec8c76da0d387217ebdf" Apr 23 17:53:35.731069 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:35.731035 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:35.732125 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:35.732106 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:35.732213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:35.732144 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:35.732213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:35.732158 
2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:35.732622 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:35.732446 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal" Apr 23 17:53:35.732622 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:35.732513 2574 scope.go:117] "RemoveContainer" containerID="e8020d570f9039b4ea4b50995811271e8802b087b8aa971f3dc2f3eb5e22620f" Apr 23 17:53:35.732752 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:35.732727 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_openshift-machine-config-operator(2e1ed6752f88ed3103e33f18a9adc980)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" podUID="2e1ed6752f88ed3103e33f18a9adc980" Apr 23 17:53:35.743422 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:35.743337 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc7fc2d57e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc7fc2d57e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off 
restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_openshift-machine-config-operator(2e1ed6752f88ed3103e33f18a9adc980),Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:48.661812606 +0000 UTC m=+6.559266370,LastTimestamp:2026-04-23 17:53:35.732677194 +0000 UTC m=+53.630130964,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}" Apr 23 17:53:36.512116 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:36.512083 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:36.733292 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:36.733266 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 17:53:37.222960 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:37.222928 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:37.489518 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:37.489445 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:37.490510 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:37.490491 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:37.490556 
ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:37.490527 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:37.490556 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:37.490542 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:37.490654 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:37.490587 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:53:37.507496 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:37.507466 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:37.507496 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:37.507480 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:53:38.502531 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:38.502498 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:39.513023 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:39.512987 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:40.427761 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:40.427729 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 23 17:53:40.509516 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:40.509491 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:41.507267 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:41.507229 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:42.510196 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:42.510165 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:42.576558 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:42.576533 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:53:43.506735 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:43.506705 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:44.232290 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:44.232257 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:53:44.507883 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:44.507795 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:44.508075 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:44.508050 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:44.509558 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:44.509540 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:44.509674 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:44.509595 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:44.509674 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:44.509612 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:44.509674 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:44.509649 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:53:44.525792 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:44.525767 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:53:45.511773 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:45.511736 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:46.506755 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:46.506727 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:47.513533 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:47.513503 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:48.507112 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:48.507081 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:49.509946 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:49.509917 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:49.641042 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:49.641015 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:49.642893 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:49.642874 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:49.642985 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:49.642907 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:49.642985 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:49.642917 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:49.643136 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:49.643123 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:53:49.643179 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:49.643171 2574 scope.go:117] "RemoveContainer" containerID="e8020d570f9039b4ea4b50995811271e8802b087b8aa971f3dc2f3eb5e22620f"
Apr 23 17:53:49.643316 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:49.643292 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_openshift-machine-config-operator(2e1ed6752f88ed3103e33f18a9adc980)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" podUID="2e1ed6752f88ed3103e33f18a9adc980"
Apr 23 17:53:49.651126 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:49.651046 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc7fc2d57e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc7fc2d57e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_openshift-machine-config-operator(2e1ed6752f88ed3103e33f18a9adc980),Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:48.661812606 +0000 UTC m=+6.559266370,LastTimestamp:2026-04-23 17:53:49.643264964 +0000 UTC m=+67.540718734,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:53:50.509546 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:50.509516 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:51.240127 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:51.240090 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:53:51.511918 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:51.511847 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:51.526080 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:51.526062 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:51.527003 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:51.526977 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:51.527110 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:51.527014 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:51.527110 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:51.527029 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:51.527110 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:51.527060 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:53:51.545784 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:51.545760 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:53:52.509645 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:52.509521 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:52.576929 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:52.576896 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:53:53.509839 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:53.509805 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:54.509851 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:54.509816 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:55.514789 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:55.514753 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:56.511722 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:56.511689 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:57.511201 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:57.511170 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:58.251110 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:58.251077 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:53:58.514567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:58.514504 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:58.545842 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:58.545821 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:58.547941 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:58.547920 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:58.548038 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:58.547956 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:58.548038 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:58.547966 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:58.548038 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:58.547995 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:53:58.568187 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:53:58.568163 2574 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:53:59.508033 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:53:59.508007 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:54:00.511324 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:00.511286 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:54:01.438754 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:01.438716 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 23 17:54:01.509872 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:01.509843 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:54:02.255811 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:02.255768 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 23 17:54:02.509722 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:02.509654 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:54:02.577179 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:02.577148 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:03.511213 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:03.511182 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:54:03.602404 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:03.602371 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 23 17:54:03.641242 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:03.641221 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:54:03.642256 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:03.642234 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:54:03.642353 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:03.642272 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:54:03.642353 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:03.642288 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:54:03.642566 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:03.642550 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:54:03.642642 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:03.642629 2574 scope.go:117] "RemoveContainer" containerID="e8020d570f9039b4ea4b50995811271e8802b087b8aa971f3dc2f3eb5e22620f"
Apr 23 17:54:03.642803 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:03.642783 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_openshift-machine-config-operator(2e1ed6752f88ed3103e33f18a9adc980)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" podUID="2e1ed6752f88ed3103e33f18a9adc980"
Apr 23 17:54:03.650504 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:03.650420 2574 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc7fc2d57e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal.18a90ddc7fc2d57e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal,UID:2e1ed6752f88ed3103e33f18a9adc980,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_openshift-machine-config-operator(2e1ed6752f88ed3103e33f18a9adc980),Source:EventSource{Component:kubelet,Host:ip-10-0-142-106.ec2.internal,},FirstTimestamp:2026-04-23 17:52:48.661812606 +0000 UTC m=+6.559266370,LastTimestamp:2026-04-23 17:54:03.642746895 +0000 UTC m=+81.540200656,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-106.ec2.internal,}"
Apr 23 17:54:04.510759 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:04.510723 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-106.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:54:04.640841 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:04.640804 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:54:04.641899 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:04.641881 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:54:04.642005 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:04.641913 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:54:04.642005 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:04.641923 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:54:04.642151 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:04.642138 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:54:05.260612 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:05.260558 2574 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-106.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:54:05.269358 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:05.269337 2574 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-4h84b"
Apr 23 17:54:05.431331 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:05.431292 2574 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Apr 23 17:54:05.530953 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:05.530901 2574 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-142-106.ec2.internal" not found
Apr 23 17:54:05.562628 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:05.562609 2574 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-142-106.ec2.internal" not found
Apr 23 17:54:05.568756 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:05.568737 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:54:05.569723 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:05.569699 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:54:05.569791 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:05.569731 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:54:05.569791 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:05.569743 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:54:05.569791 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:05.569769 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:54:05.592754 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:05.592732 2574 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-142-106.ec2.internal"
Apr 23 17:54:05.592833 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:05.592755 2574 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-142-106.ec2.internal\": node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:05.642754 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:05.642725 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:05.743004 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:05.742972 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:05.843512 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:05.843462 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:05.944008 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:05.943978 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:06.044784 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:06.044758 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:06.145421 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:06.145358 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:06.245951 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:06.245923 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:06.270467 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:06.270434 2574 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-22 17:49:05 +0000 UTC" deadline="2027-10-16 09:47:06.974853136 +0000 UTC"
Apr 23 17:54:06.270539 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:06.270466 2574 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="12975h53m0.704392316s"
Apr 23 17:54:06.347056 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:06.347022 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:06.447655 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:06.447598 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:06.547800 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:06.547773 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:06.554943 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:06.554928 2574 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Apr 23 17:54:06.573561 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:06.573532 2574 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 23 17:54:06.625982 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:06.625961 2574 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-s6h9p"
Apr 23 17:54:06.635370 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:06.635346 2574 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-s6h9p"
Apr 23 17:54:06.647889 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:06.647872 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:06.748447 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:06.748410 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:06.848921 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:06.848890 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:06.949453 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:06.949427 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:07.050299 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:07.050249 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:07.150849 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:07.150821 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:07.251551 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:07.251525 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:07.352158 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:07.352104 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:07.452829 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:07.452806 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:07.553635 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:07.553610 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:07.636991 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:07.636927 2574 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-22 17:49:06 +0000 UTC" deadline="2027-12-30 00:29:23.239797134 +0000 UTC"
Apr 23 17:54:07.636991 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:07.636954 2574 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="14766h35m15.602847806s"
Apr 23 17:54:07.654208 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:07.654187 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:07.754803 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:07.754768 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:07.855253 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:07.855227 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:07.955895 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:07.955830 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:08.056502 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:08.056475 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:08.157030 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:08.157004 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:08.257617 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:08.257598 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:08.358228 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:08.358203 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:08.458450 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:08.458426 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:08.559517 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:08.559464 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:08.637221 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:08.637189 2574 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-22 17:49:06 +0000 UTC" deadline="2027-12-12 11:07:53.290535543 +0000 UTC"
Apr 23 17:54:08.637221 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:08.637219 2574 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="14345h13m44.653321617s"
Apr 23 17:54:08.660465 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:08.660450 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:08.761038 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:08.761006 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:08.861990 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:08.861929 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:08.962032 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:08.962002 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:09.062675 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:09.062637 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:09.163233 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:09.163190 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:09.263773 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:09.263752 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:09.364327 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:09.364300 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:09.465049 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:09.465027 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:09.565961 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:09.565941 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:09.666929 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:09.666910 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:09.767585 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:09.767505 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node
\"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:09.867869 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:09.867837 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:09.968599 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:09.968553 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:10.069337 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:10.069279 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:10.169902 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:10.169877 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:10.269965 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:10.269931 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:10.370612 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:10.370530 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:10.471377 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:10.471352 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:10.572375 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:10.572351 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:10.673409 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:10.673353 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:10.773979 
ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:10.773955 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:10.874999 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:10.874974 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:10.975073 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:10.975046 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:11.075818 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:11.075788 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:11.176408 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:11.176378 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:11.276950 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:11.276895 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:11.377491 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:11.377464 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:11.478078 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:11.478036 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:11.578787 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:11.578720 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:11.679586 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:11.679545 2574 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:11.780037 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:11.780005 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:11.880785 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:11.880715 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:11.981367 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:11.981333 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:12.082439 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:12.082417 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:12.183064 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:12.183016 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:12.283768 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:12.283741 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:12.384333 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:12.384312 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:12.484703 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:12.484683 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:12.578128 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:12.578094 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" 
err="failed to get node info: node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:12.584982 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:12.584966 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:12.685809 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:12.685772 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:12.786173 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:12.786116 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:12.886670 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:12.886636 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:12.987272 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:12.987239 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:13.088068 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:13.087997 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:13.189082 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:13.189062 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:13.289666 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:13.289631 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:13.390192 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:13.390129 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not 
found" Apr 23 17:54:13.490797 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:13.490771 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:13.591554 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:13.591527 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:13.691934 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:13.691887 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:13.792288 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:13.792264 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:13.892949 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:13.892921 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:13.993558 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:13.993534 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:14.094153 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:14.094121 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:14.194712 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:14.194678 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:14.295448 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:14.295390 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:14.395995 ip-10-0-142-106 kubenswrapper[2574]: E0423 
17:54:14.395974 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:14.496371 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:14.496346 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:14.597451 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:14.597407 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:14.697606 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:14.697590 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:14.798602 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:14.798567 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:14.899218 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:14.899159 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:14.999408 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:14.999387 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:15.082790 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:15.082767 2574 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:54:15.100014 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:15.099991 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:15.200656 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:15.200601 2574 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:15.301508 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:15.301481 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:15.402076 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:15.402047 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:15.502380 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:15.502360 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:15.603241 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:15.603216 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:15.704167 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:15.704134 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:15.804738 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:15.804682 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:15.905212 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:15.905178 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:15.962805 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:15.962787 2574 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-142-106.ec2.internal\": node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:16.005655 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:16.005633 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node 
\"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:16.106139 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:16.106072 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:16.206655 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:16.206625 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:16.307184 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:16.307161 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:16.407795 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:16.407729 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:16.508808 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:16.508778 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:16.609409 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:16.609380 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:16.640902 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:16.640881 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:54:16.641936 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:16.641916 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:54:16.642022 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:16.641951 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:54:16.642022 ip-10-0-142-106 
kubenswrapper[2574]: I0423 17:54:16.641966 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:54:16.642282 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:16.642264 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal" Apr 23 17:54:16.642339 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:16.642327 2574 scope.go:117] "RemoveContainer" containerID="e8020d570f9039b4ea4b50995811271e8802b087b8aa971f3dc2f3eb5e22620f" Apr 23 17:54:16.709707 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:16.709686 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:16.788367 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:16.788342 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 17:54:16.788655 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:16.788636 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" event={"ID":"2e1ed6752f88ed3103e33f18a9adc980","Type":"ContainerStarted","Data":"f6593faae60eb597e38cecc27e8929b4b16f2304e7cf5be91d1c67d85a75e434"} Apr 23 17:54:16.788747 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:16.788736 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:54:16.789483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:16.789467 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:54:16.789537 ip-10-0-142-106 kubenswrapper[2574]: I0423 
17:54:16.789496 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:54:16.789537 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:16.789506 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:54:16.789685 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:16.789673 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-106.ec2.internal\" not found" node="ip-10-0-142-106.ec2.internal" Apr 23 17:54:16.810006 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:16.809989 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:16.910505 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:16.910485 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:17.011357 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:17.011337 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:17.111957 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:17.111926 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:17.212484 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:17.212467 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:17.313073 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:17.313016 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:17.413625 ip-10-0-142-106 kubenswrapper[2574]: E0423 
17:54:17.413593 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:17.513956 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:17.513929 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:17.614328 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:17.614274 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:17.715235 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:17.715206 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:17.815499 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:17.815476 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:17.916018 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:17.915959 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:18.016258 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:18.016234 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:18.116851 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:18.116822 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:18.217369 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:18.217351 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:18.317947 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:18.317921 2574 kubelet_node_status.go:515] "Error getting the 
current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:18.418497 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:18.418473 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:18.519382 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:18.519334 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:18.620028 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:18.620005 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:18.720948 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:18.720915 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:18.821445 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:18.821393 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:18.921966 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:18.921936 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:19.022199 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:19.022168 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:19.122796 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:19.122743 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:19.223327 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:19.223287 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not 
found" Apr 23 17:54:19.323881 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:19.323860 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:19.424451 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:19.424402 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:19.525255 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:19.525236 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:19.625915 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:19.625889 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:19.726128 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:19.726106 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:19.826742 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:19.826718 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:19.927290 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:19.927262 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:20.027919 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:20.027869 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:20.128469 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:20.128441 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:20.229304 ip-10-0-142-106 kubenswrapper[2574]: E0423 
17:54:20.229284 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:20.329878 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:20.329827 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:20.430402 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:20.430369 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:20.531396 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:20.531372 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:20.632267 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:20.632220 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:20.733037 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:20.733016 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:20.833623 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:20.833599 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:20.934200 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:20.934145 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:21.034969 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:21.034947 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:21.135453 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:21.135430 2574 kubelet_node_status.go:515] "Error getting the 
Apr 23 17:54:21.236005 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:21.235979 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found"
[... identical kubelet_node_status.go:515 "Error getting the current node from lister" entry repeated at ~100 ms intervals through Apr 23 17:54:38.447159; the only distinct entries in this window are shown below ...]
Apr 23 17:54:22.578623 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:22.578606 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:26.084063 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:26.084040 2574 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-142-106.ec2.internal\": node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:32.579715 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:32.579691 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-106.ec2.internal\" not found"
Apr 23 17:54:36.446130 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:36.446112 2574 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-142-106.ec2.internal\": node \"ip-10-0-142-106.ec2.internal\" not found"
err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:38.547963 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:38.547939 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:38.648657 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:38.648633 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:38.749354 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:38.749337 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:38.850363 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:38.850337 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:38.951436 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:38.951412 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:39.052320 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:39.052254 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:39.152849 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:39.152816 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:39.253396 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:39.253366 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:39.353862 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:39.353808 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 
17:54:39.454405 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:39.454375 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:39.555243 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:39.555213 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:39.655491 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:39.655444 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:39.756009 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:39.755989 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:39.856660 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:39.856635 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:39.957409 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:39.957355 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:40.058012 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:40.057983 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:40.158626 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:40.158602 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:40.259194 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:40.259164 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:40.359771 ip-10-0-142-106 kubenswrapper[2574]: E0423 
17:54:40.359748 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:40.460444 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:40.460429 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:40.560541 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:40.560479 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:40.661378 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:40.661361 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:40.762213 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:40.762190 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:40.862688 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:40.862637 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:40.963360 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:40.963332 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:41.064004 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:41.063980 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:41.164120 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:41.164065 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:41.265039 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:41.265016 2574 kubelet_node_status.go:515] "Error getting the 
current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:41.365632 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:41.365601 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:41.466698 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:41.466679 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:41.567552 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:41.567529 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:41.668434 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:41.668413 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:41.769054 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:41.769005 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:41.870041 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:41.870019 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:41.970750 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:41.970727 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:42.070849 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:42.070812 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:42.171419 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:42.171394 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not 
found" Apr 23 17:54:42.272133 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:42.272103 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:42.372740 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:42.372691 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:42.473152 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:42.473133 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:42.573585 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:42.573554 2574 kubelet_node_status.go:509] "Node not becoming ready in time after startup" Apr 23 17:54:42.579814 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:42.579797 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:42.597602 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:42.597563 2574 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:46.485460 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:46.485410 2574 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-142-106.ec2.internal\": node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:47.598894 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:47.598862 2574 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Apr 23 17:54:51.934812 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:51.934774 2574 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:54:51.975779 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:54:51.975747 2574 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:54:52.580669 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:52.580633 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:52.599192 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:52.599163 2574 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:54:56.791224 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:56.791178 2574 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-142-106.ec2.internal\": node \"ip-10-0-142-106.ec2.internal\" not found" Apr 23 17:54:57.600465 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:54:57.600426 2574 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Apr 23 17:55:00.122481 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.122448 2574 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:55:00.205999 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.205964 2574 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" Apr 23 17:55:00.220760 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.220741 2574 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 23 17:55:00.220846 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.220834 2574 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-106.ec2.internal" Apr 23 17:55:00.228975 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.228962 2574 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 23 17:55:00.560259 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.560235 2574 apiserver.go:52] "Watching apiserver" Apr 23 17:55:00.568473 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.568453 2574 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Apr 23 17:55:00.568851 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.568830 2574 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-multus/multus-additional-cni-plugins-4zn98","openshift-network-operator/iptables-alerter-z254x","kube-system/global-pull-secret-syncer-p5ndb","kube-system/konnectivity-agent-wx25k","openshift-cluster-node-tuning-operator/tuned-fggsl","openshift-dns/node-resolver-cnp6f","openshift-multus/multus-5bjmz","openshift-multus/network-metrics-daemon-45ztw","openshift-network-diagnostics/network-check-target-88zs6","openshift-ovn-kubernetes/ovnkube-node-2kfrf","kube-system/kube-apiserver-proxy-ip-10-0-142-106.ec2.internal","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs","openshift-image-registry/node-ca-xcs8j","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal"] Apr 23 17:55:00.572151 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.572133 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.574446 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.574426 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-z254x" Apr 23 17:55:00.575317 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.575289 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 23 17:55:00.575457 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.575320 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 23 17:55:00.575457 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.575338 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 23 17:55:00.575457 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.575300 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 23 17:55:00.575635 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.575451 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-2vg5p\"" Apr 23 17:55:00.575807 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.575794 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 23 17:55:00.576502 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.576486 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 23 17:55:00.576625 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.576609 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" Apr 23 17:55:00.578468 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.578449 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 23 17:55:00.578789 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.578773 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-wx25k" Apr 23 17:55:00.579557 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.579384 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:55:00.579742 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.579719 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\"" Apr 23 17:55:00.579842 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.579762 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 23 17:55:00.579842 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.579782 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\"" Apr 23 17:55:00.579842 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.579817 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 23 17:55:00.580028 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.579837 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-k7xsh\"" Apr 23 17:55:00.580028 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.579785 2574 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-2758j\"" Apr 23 17:55:00.581169 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.581150 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.581249 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.581151 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" Apr 23 17:55:00.581504 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.581488 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\"" Apr 23 17:55:00.581696 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.581681 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-z5mb4\"" Apr 23 17:55:00.585827 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.585786 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-cnp6f" Apr 23 17:55:00.585980 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.585942 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 23 17:55:00.586491 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.586371 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-r6lhx\"" Apr 23 17:55:00.588062 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.588042 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.588450 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.588435 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Apr 23 17:55:00.588521 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.588459 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-x8mkm\"" Apr 23 17:55:00.588629 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.588617 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Apr 23 17:55:00.590293 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.590275 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:00.590385 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:00.590342 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c" Apr 23 17:55:00.592482 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.592462 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:55:00.592585 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.592549 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 23 17:55:00.592654 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.592554 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 23 17:55:00.592654 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.592616 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 23 17:55:00.592654 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.592632 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 23 17:55:00.592784 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.592681 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-4s597\"" Apr 23 17:55:00.593735 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.593717 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4zn98" Apr 23 17:55:00.596044 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.596027 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:00.596141 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:00.596083 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb" Apr 23 17:55:00.598123 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.598106 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 23 17:55:00.598264 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.598247 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 23 17:55:00.598340 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.598298 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-btltb\"" Apr 23 17:55:00.598398 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.598356 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xcs8j" Apr 23 17:55:00.600568 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.600550 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:00.600800 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:00.600782 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f" Apr 23 17:55:00.603785 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.603768 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 23 17:55:00.603899 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.603775 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 23 17:55:00.603996 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.603982 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 23 17:55:00.604200 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.604188 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-xb587\"" Apr 23 17:55:00.608841 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.608680 2574 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 23 17:55:00.706668 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706647 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-socket-dir\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" Apr 23 17:55:00.706749 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706673 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/84132246-7311-4103-a045-d865e6d62737-cnibin\") pod \"multus-additional-cni-plugins-4zn98\" (UID: 
\"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98" Apr 23 17:55:00.706749 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706691 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/84132246-7311-4103-a045-d865e6d62737-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98" Apr 23 17:55:00.706749 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706715 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret\") pod \"global-pull-secret-syncer-p5ndb\" (UID: \"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb\") " pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:00.706749 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706739 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-system-cni-dir\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.706902 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706758 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-hostroot\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.706902 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706773 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-etc-openvswitch\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.706902 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706788 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-run-ovn-kubernetes\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.706902 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706805 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26rwf\" (UniqueName: \"kubernetes.io/projected/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-kube-api-access-26rwf\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.706902 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706829 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-tuned\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.706902 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706850 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8vl6\" (UniqueName: \"kubernetes.io/projected/424e3ffc-c16a-4133-9d02-752d3ff52059-kube-api-access-f8vl6\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" Apr 23 
17:55:00.706902 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706864 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/0dbc6169-c545-4cc9-a3ea-83c161f64108-agent-certs\") pod \"konnectivity-agent-wx25k\" (UID: \"0dbc6169-c545-4cc9-a3ea-83c161f64108\") " pod="kube-system/konnectivity-agent-wx25k" Apr 23 17:55:00.706902 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706878 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-node-log\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.706902 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706891 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-ovn-node-metrics-cert\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.707172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706914 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-kubernetes\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.707172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706932 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e31c73e-77bf-4968-b370-f732e248be97-serviceca\") pod \"node-ca-xcs8j\" (UID: 
\"9e31c73e-77bf-4968-b370-f732e248be97\") " pod="openshift-image-registry/node-ca-xcs8j" Apr 23 17:55:00.707172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706947 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-log-socket\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.707172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706960 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-device-dir\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" Apr 23 17:55:00.707172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.706979 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/84132246-7311-4103-a045-d865e6d62737-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98" Apr 23 17:55:00.707172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707002 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-run-openvswitch\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.707172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707031 2574 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.707172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707047 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9e31c73e-77bf-4968-b370-f732e248be97-host\") pod \"node-ca-xcs8j\" (UID: \"9e31c73e-77bf-4968-b370-f732e248be97\") " pod="openshift-image-registry/node-ca-xcs8j" Apr 23 17:55:00.707172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707084 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/0dbc6169-c545-4cc9-a3ea-83c161f64108-konnectivity-ca\") pod \"konnectivity-agent-wx25k\" (UID: \"0dbc6169-c545-4cc9-a3ea-83c161f64108\") " pod="kube-system/konnectivity-agent-wx25k" Apr 23 17:55:00.707172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707108 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-kubelet-config\") pod \"global-pull-secret-syncer-p5ndb\" (UID: \"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb\") " pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:00.707172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707124 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-dbus\") pod \"global-pull-secret-syncer-p5ndb\" (UID: \"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb\") " 
pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:00.707172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707154 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-cni-netd\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.707508 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707178 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs\") pod \"network-metrics-daemon-45ztw\" (UID: \"5af1b6bf-71a6-4257-9a8a-b48c1c14659c\") " pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:00.707508 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707192 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-kubelet\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.707508 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707205 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-sysctl-conf\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.707508 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707240 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/4fed6b9f-295e-4b13-8a53-cddd432bda46-hosts-file\") pod \"node-resolver-cnp6f\" (UID: \"4fed6b9f-295e-4b13-8a53-cddd432bda46\") " pod="openshift-dns/node-resolver-cnp6f" Apr 23 17:55:00.707508 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707261 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-slash\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.707508 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707289 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-etc-kubernetes\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.707508 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707315 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-cnibin\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.707508 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707330 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxrr4\" (UniqueName: \"kubernetes.io/projected/5645d713-95ce-41af-878d-48178971c03c-kube-api-access-xxrr4\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.707508 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707349 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/84132246-7311-4103-a045-d865e6d62737-os-release\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98" Apr 23 17:55:00.707508 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707366 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-run\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.707508 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707381 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdfgg\" (UniqueName: \"kubernetes.io/projected/4fed6b9f-295e-4b13-8a53-cddd432bda46-kube-api-access-cdfgg\") pod \"node-resolver-cnp6f\" (UID: \"4fed6b9f-295e-4b13-8a53-cddd432bda46\") " pod="openshift-dns/node-resolver-cnp6f" Apr 23 17:55:00.707508 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707402 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-registration-dir\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" Apr 23 17:55:00.707508 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707426 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-sysctl-d\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.707508 ip-10-0-142-106 
kubenswrapper[2574]: I0423 17:55:00.707442 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-sys\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.707508 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707487 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-var-lib-kubelet\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.707508 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707505 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4fed6b9f-295e-4b13-8a53-cddd432bda46-tmp-dir\") pod \"node-resolver-cnp6f\" (UID: \"4fed6b9f-295e-4b13-8a53-cddd432bda46\") " pod="openshift-dns/node-resolver-cnp6f" Apr 23 17:55:00.707956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707524 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-env-overrides\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.707956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707545 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-sysconfig\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " 
pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.707956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707566 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-host\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.707956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707609 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84132246-7311-4103-a045-d865e6d62737-system-cni-dir\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98" Apr 23 17:55:00.707956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707628 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/84132246-7311-4103-a045-d865e6d62737-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98" Apr 23 17:55:00.707956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707644 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-run-netns\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.707956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707658 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-lib-modules\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.707956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707678 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-multus-cni-dir\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.707956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707695 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-var-lib-kubelet\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.707956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707716 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zks8\" (UniqueName: \"kubernetes.io/projected/70e049a1-02dc-4e15-94c0-0119bbff0af3-kube-api-access-7zks8\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.707956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707744 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-cni-bin\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.707956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707761 2574 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-sys-fs\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" Apr 23 17:55:00.707956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707775 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-os-release\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.707956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707798 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5645d713-95ce-41af-878d-48178971c03c-cni-binary-copy\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.707956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707816 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-run-netns\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.707956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707836 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5645d713-95ce-41af-878d-48178971c03c-multus-daemon-config\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.708388 ip-10-0-142-106 kubenswrapper[2574]: I0423 
17:55:00.707850 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc2r5\" (UniqueName: \"kubernetes.io/projected/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-kube-api-access-hc2r5\") pod \"network-metrics-daemon-45ztw\" (UID: \"5af1b6bf-71a6-4257-9a8a-b48c1c14659c\") " pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:00.708388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707871 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/84132246-7311-4103-a045-d865e6d62737-cni-binary-copy\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98" Apr 23 17:55:00.708388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707894 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-var-lib-cni-bin\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.708388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707913 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-run-systemd\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.708388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707928 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70e049a1-02dc-4e15-94c0-0119bbff0af3-tmp\") pod \"tuned-fggsl\" (UID: 
\"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.708388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707947 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7gcb\" (UniqueName: \"kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb\") pod \"network-check-target-88zs6\" (UID: \"35ee14f0-f248-4da4-a578-5901f2cd8f5f\") " pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:00.708388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707968 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-var-lib-cni-multus\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.708388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.707993 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-multus-conf-dir\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.708388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708011 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-run-multus-certs\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.708388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708024 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-run-ovn\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.708388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708038 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-modprobe-d\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.708388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708062 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-systemd\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.708388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708081 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-kubelet-dir\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" Apr 23 17:55:00.708388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708096 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-multus-socket-dir-parent\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.708388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708110 
2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-run-k8s-cni-cncf-io\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.708388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708131 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwv4s\" (UniqueName: \"kubernetes.io/projected/84132246-7311-4103-a045-d865e6d62737-kube-api-access-jwv4s\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98" Apr 23 17:55:00.708836 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708149 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-ovnkube-script-lib\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.708836 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708164 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/e6422dbb-93b9-4284-b862-8a2613b43681-iptables-alerter-script\") pod \"iptables-alerter-z254x\" (UID: \"e6422dbb-93b9-4284-b862-8a2613b43681\") " pod="openshift-network-operator/iptables-alerter-z254x" Apr 23 17:55:00.708836 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708178 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e6422dbb-93b9-4284-b862-8a2613b43681-host-slash\") pod \"iptables-alerter-z254x\" 
(UID: \"e6422dbb-93b9-4284-b862-8a2613b43681\") " pod="openshift-network-operator/iptables-alerter-z254x" Apr 23 17:55:00.708836 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708193 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-etc-selinux\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" Apr 23 17:55:00.708836 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708222 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-systemd-units\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.708836 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708241 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-var-lib-openvswitch\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.708836 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708257 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-ovnkube-config\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.708836 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708272 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-rkd59\" (UniqueName: \"kubernetes.io/projected/9e31c73e-77bf-4968-b370-f732e248be97-kube-api-access-rkd59\") pod \"node-ca-xcs8j\" (UID: \"9e31c73e-77bf-4968-b370-f732e248be97\") " pod="openshift-image-registry/node-ca-xcs8j"
Apr 23 17:55:00.708836 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.708292 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-924d7\" (UniqueName: \"kubernetes.io/projected/e6422dbb-93b9-4284-b862-8a2613b43681-kube-api-access-924d7\") pod \"iptables-alerter-z254x\" (UID: \"e6422dbb-93b9-4284-b862-8a2613b43681\") " pod="openshift-network-operator/iptables-alerter-z254x"
Apr 23 17:55:00.809298 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809263 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/84132246-7311-4103-a045-d865e6d62737-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98"
Apr 23 17:55:00.809298 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809300 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-run-openvswitch\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.809459 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809317 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.809459 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809331 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9e31c73e-77bf-4968-b370-f732e248be97-host\") pod \"node-ca-xcs8j\" (UID: \"9e31c73e-77bf-4968-b370-f732e248be97\") " pod="openshift-image-registry/node-ca-xcs8j"
Apr 23 17:55:00.809459 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809346 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/0dbc6169-c545-4cc9-a3ea-83c161f64108-konnectivity-ca\") pod \"konnectivity-agent-wx25k\" (UID: \"0dbc6169-c545-4cc9-a3ea-83c161f64108\") " pod="kube-system/konnectivity-agent-wx25k"
Apr 23 17:55:00.809459 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809362 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-kubelet-config\") pod \"global-pull-secret-syncer-p5ndb\" (UID: \"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb\") " pod="kube-system/global-pull-secret-syncer-p5ndb"
Apr 23 17:55:00.809459 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809379 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.809459 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809398 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9e31c73e-77bf-4968-b370-f732e248be97-host\") pod \"node-ca-xcs8j\" (UID: \"9e31c73e-77bf-4968-b370-f732e248be97\") " pod="openshift-image-registry/node-ca-xcs8j"
Apr 23 17:55:00.809459 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809384 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-dbus\") pod \"global-pull-secret-syncer-p5ndb\" (UID: \"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb\") " pod="kube-system/global-pull-secret-syncer-p5ndb"
Apr 23 17:55:00.809459 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809437 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-kubelet-config\") pod \"global-pull-secret-syncer-p5ndb\" (UID: \"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb\") " pod="kube-system/global-pull-secret-syncer-p5ndb"
Apr 23 17:55:00.809459 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809379 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-run-openvswitch\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.809459 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809453 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-cni-netd\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809480 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs\") pod \"network-metrics-daemon-45ztw\" (UID: \"5af1b6bf-71a6-4257-9a8a-b48c1c14659c\") " pod="openshift-multus/network-metrics-daemon-45ztw"
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809501 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-kubelet\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809511 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-dbus\") pod \"global-pull-secret-syncer-p5ndb\" (UID: \"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb\") " pod="kube-system/global-pull-secret-syncer-p5ndb"
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809525 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-sysctl-conf\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809548 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4fed6b9f-295e-4b13-8a53-cddd432bda46-hosts-file\") pod \"node-resolver-cnp6f\" (UID: \"4fed6b9f-295e-4b13-8a53-cddd432bda46\") " pod="openshift-dns/node-resolver-cnp6f"
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809555 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-cni-netd\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809589 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-slash\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809592 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-kubelet\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809616 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-etc-kubernetes\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809619 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4fed6b9f-295e-4b13-8a53-cddd432bda46-hosts-file\") pod \"node-resolver-cnp6f\" (UID: \"4fed6b9f-295e-4b13-8a53-cddd432bda46\") " pod="openshift-dns/node-resolver-cnp6f"
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:00.809636 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809646 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-slash\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809661 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-etc-kubernetes\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809687 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-sysctl-conf\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809690 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-cnibin\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809639 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-cnibin\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:00.809718 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs podName:5af1b6bf-71a6-4257-9a8a-b48c1c14659c nodeName:}" failed. No retries permitted until 2026-04-23 17:55:01.309688086 +0000 UTC m=+139.207141863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs") pod "network-metrics-daemon-45ztw" (UID: "5af1b6bf-71a6-4257-9a8a-b48c1c14659c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:00.809906 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809763 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xxrr4\" (UniqueName: \"kubernetes.io/projected/5645d713-95ce-41af-878d-48178971c03c-kube-api-access-xxrr4\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809795 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/84132246-7311-4103-a045-d865e6d62737-os-release\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809821 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-run\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809847 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cdfgg\" (UniqueName: \"kubernetes.io/projected/4fed6b9f-295e-4b13-8a53-cddd432bda46-kube-api-access-cdfgg\") pod \"node-resolver-cnp6f\" (UID: \"4fed6b9f-295e-4b13-8a53-cddd432bda46\") " pod="openshift-dns/node-resolver-cnp6f"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809866 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/84132246-7311-4103-a045-d865e6d62737-os-release\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809871 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-registration-dir\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809892 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-run\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809911 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-sysctl-d\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809919 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/0dbc6169-c545-4cc9-a3ea-83c161f64108-konnectivity-ca\") pod \"konnectivity-agent-wx25k\" (UID: \"0dbc6169-c545-4cc9-a3ea-83c161f64108\") " pod="kube-system/konnectivity-agent-wx25k"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809939 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-sys\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809940 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/84132246-7311-4103-a045-d865e6d62737-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809975 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-var-lib-kubelet\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.809979 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-sys\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810002 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4fed6b9f-295e-4b13-8a53-cddd432bda46-tmp-dir\") pod \"node-resolver-cnp6f\" (UID: \"4fed6b9f-295e-4b13-8a53-cddd432bda46\") " pod="openshift-dns/node-resolver-cnp6f"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810025 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-var-lib-kubelet\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810029 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-env-overrides\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810053 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-sysconfig\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810077 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-host\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.810717 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810069 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-sysctl-d\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810054 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-registration-dir\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810100 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-sysconfig\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810120 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-host\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810147 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84132246-7311-4103-a045-d865e6d62737-system-cni-dir\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810166 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/84132246-7311-4103-a045-d865e6d62737-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810193 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-run-netns\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810203 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84132246-7311-4103-a045-d865e6d62737-system-cni-dir\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810217 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-lib-modules\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810240 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-multus-cni-dir\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810254 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-run-netns\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810263 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4fed6b9f-295e-4b13-8a53-cddd432bda46-tmp-dir\") pod \"node-resolver-cnp6f\" (UID: \"4fed6b9f-295e-4b13-8a53-cddd432bda46\") " pod="openshift-dns/node-resolver-cnp6f"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810290 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-var-lib-kubelet\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810265 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-var-lib-kubelet\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810309 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-lib-modules\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810316 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7zks8\" (UniqueName: \"kubernetes.io/projected/70e049a1-02dc-4e15-94c0-0119bbff0af3-kube-api-access-7zks8\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810308 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/84132246-7311-4103-a045-d865e6d62737-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98"
Apr 23 17:55:00.811483 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810340 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-cni-bin\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810377 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-cni-bin\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810397 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-sys-fs\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810403 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-multus-cni-dir\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810429 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-os-release\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810443 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-env-overrides\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810465 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5645d713-95ce-41af-878d-48178971c03c-cni-binary-copy\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810505 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-run-netns\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810478 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-os-release\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810476 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-sys-fs\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810548 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5645d713-95ce-41af-878d-48178971c03c-multus-daemon-config\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810556 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-run-netns\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810591 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hc2r5\" (UniqueName: \"kubernetes.io/projected/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-kube-api-access-hc2r5\") pod \"network-metrics-daemon-45ztw\" (UID: \"5af1b6bf-71a6-4257-9a8a-b48c1c14659c\") " pod="openshift-multus/network-metrics-daemon-45ztw"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810615 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/84132246-7311-4103-a045-d865e6d62737-cni-binary-copy\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810642 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-var-lib-cni-bin\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810667 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-run-systemd\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810691 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70e049a1-02dc-4e15-94c0-0119bbff0af3-tmp\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810714 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k7gcb\" (UniqueName: \"kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb\") pod \"network-check-target-88zs6\" (UID: \"35ee14f0-f248-4da4-a578-5901f2cd8f5f\") " pod="openshift-network-diagnostics/network-check-target-88zs6"
Apr 23 17:55:00.812316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810736 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-var-lib-cni-bin\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810737 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-var-lib-cni-multus\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810759 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-run-systemd\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810780 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-multus-conf-dir\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810802 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-var-lib-cni-multus\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810811 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-run-multus-certs\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810844 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-run-ovn\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810848 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-run-multus-certs\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810886 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-multus-conf-dir\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810922 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-run-ovn\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810953 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-modprobe-d\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810974 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-systemd\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.810995 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-kubelet-dir\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811001 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5645d713-95ce-41af-878d-48178971c03c-cni-binary-copy\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811020 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-multus-socket-dir-parent\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811023 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5645d713-95ce-41af-878d-48178971c03c-multus-daemon-config\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811032 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-systemd\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811042 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-run-k8s-cni-cncf-io\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.812865 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811058 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-modprobe-d\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.813397 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811081 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-host-run-k8s-cni-cncf-io\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.813397 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811089 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName:
\"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-kubelet-dir\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" Apr 23 17:55:00.813397 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811093 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-multus-socket-dir-parent\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.813397 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811119 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jwv4s\" (UniqueName: \"kubernetes.io/projected/84132246-7311-4103-a045-d865e6d62737-kube-api-access-jwv4s\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98" Apr 23 17:55:00.813397 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811138 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-ovnkube-script-lib\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.813397 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811147 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/84132246-7311-4103-a045-d865e6d62737-cni-binary-copy\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98" Apr 23 17:55:00.813397 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811159 2574 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/e6422dbb-93b9-4284-b862-8a2613b43681-iptables-alerter-script\") pod \"iptables-alerter-z254x\" (UID: \"e6422dbb-93b9-4284-b862-8a2613b43681\") " pod="openshift-network-operator/iptables-alerter-z254x" Apr 23 17:55:00.813397 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811180 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e6422dbb-93b9-4284-b862-8a2613b43681-host-slash\") pod \"iptables-alerter-z254x\" (UID: \"e6422dbb-93b9-4284-b862-8a2613b43681\") " pod="openshift-network-operator/iptables-alerter-z254x" Apr 23 17:55:00.813397 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811205 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-etc-selinux\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" Apr 23 17:55:00.813397 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811229 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-systemd-units\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.813397 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811246 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e6422dbb-93b9-4284-b862-8a2613b43681-host-slash\") pod \"iptables-alerter-z254x\" (UID: \"e6422dbb-93b9-4284-b862-8a2613b43681\") " pod="openshift-network-operator/iptables-alerter-z254x" Apr 23 17:55:00.813397 
ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811253 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-var-lib-openvswitch\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.813397 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811292 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-systemd-units\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.813397 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811304 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-ovnkube-config\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.813397 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811329 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rkd59\" (UniqueName: \"kubernetes.io/projected/9e31c73e-77bf-4968-b370-f732e248be97-kube-api-access-rkd59\") pod \"node-ca-xcs8j\" (UID: \"9e31c73e-77bf-4968-b370-f732e248be97\") " pod="openshift-image-registry/node-ca-xcs8j" Apr 23 17:55:00.813397 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811331 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-var-lib-openvswitch\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" 
Apr 23 17:55:00.813397 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811326 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-etc-selinux\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811357 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-924d7\" (UniqueName: \"kubernetes.io/projected/e6422dbb-93b9-4284-b862-8a2613b43681-kube-api-access-924d7\") pod \"iptables-alerter-z254x\" (UID: \"e6422dbb-93b9-4284-b862-8a2613b43681\") " pod="openshift-network-operator/iptables-alerter-z254x" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811363 2574 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811407 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-socket-dir\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811437 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/84132246-7311-4103-a045-d865e6d62737-cnibin\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811464 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/84132246-7311-4103-a045-d865e6d62737-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811489 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret\") pod \"global-pull-secret-syncer-p5ndb\" (UID: \"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb\") " pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811517 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-system-cni-dir\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811531 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/84132246-7311-4103-a045-d865e6d62737-cnibin\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811541 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-hostroot\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811566 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-etc-openvswitch\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811610 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-run-ovn-kubernetes\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811623 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-ovnkube-script-lib\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811634 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-26rwf\" (UniqueName: \"kubernetes.io/projected/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-kube-api-access-26rwf\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:00.811651 2574 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811658 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-socket-dir\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811667 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-tuned\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811694 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f8vl6\" (UniqueName: \"kubernetes.io/projected/424e3ffc-c16a-4133-9d02-752d3ff52059-kube-api-access-f8vl6\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: 
\"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" Apr 23 17:55:00.813878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811724 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/0dbc6169-c545-4cc9-a3ea-83c161f64108-agent-certs\") pod \"konnectivity-agent-wx25k\" (UID: \"0dbc6169-c545-4cc9-a3ea-83c161f64108\") " pod="kube-system/konnectivity-agent-wx25k" Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811732 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/e6422dbb-93b9-4284-b862-8a2613b43681-iptables-alerter-script\") pod \"iptables-alerter-z254x\" (UID: \"e6422dbb-93b9-4284-b862-8a2613b43681\") " pod="openshift-network-operator/iptables-alerter-z254x" Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811737 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-system-cni-dir\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811698 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5645d713-95ce-41af-878d-48178971c03c-hostroot\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz" Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:00.811755 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret podName:f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb nodeName:}" failed. 
No retries permitted until 2026-04-23 17:55:01.311740442 +0000 UTC m=+139.209194206 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret") pod "global-pull-secret-syncer-p5ndb" (UID: "f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811772 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-node-log\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811794 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-node-log\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811796 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-ovn-node-metrics-cert\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811832 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-kubernetes\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " 
pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811856 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e31c73e-77bf-4968-b370-f732e248be97-serviceca\") pod \"node-ca-xcs8j\" (UID: \"9e31c73e-77bf-4968-b370-f732e248be97\") " pod="openshift-image-registry/node-ca-xcs8j" Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811884 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-log-socket\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811890 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-host-run-ovn-kubernetes\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811907 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-device-dir\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811964 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-ovnkube-config\") pod \"ovnkube-node-2kfrf\" (UID: 
\"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811976 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/424e3ffc-c16a-4133-9d02-752d3ff52059-device-dir\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.811856 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-etc-openvswitch\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.812024 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-log-socket\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.814368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.812065 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-kubernetes\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.814966 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.812188 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/84132246-7311-4103-a045-d865e6d62737-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98" Apr 23 17:55:00.814966 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.812364 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e31c73e-77bf-4968-b370-f732e248be97-serviceca\") pod \"node-ca-xcs8j\" (UID: \"9e31c73e-77bf-4968-b370-f732e248be97\") " pod="openshift-image-registry/node-ca-xcs8j" Apr 23 17:55:00.815119 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.815100 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/70e049a1-02dc-4e15-94c0-0119bbff0af3-etc-tuned\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.815196 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.815177 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70e049a1-02dc-4e15-94c0-0119bbff0af3-tmp\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.815252 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.815237 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-ovn-node-metrics-cert\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:00.815364 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.815349 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/0dbc6169-c545-4cc9-a3ea-83c161f64108-agent-certs\") pod \"konnectivity-agent-wx25k\" (UID: 
\"0dbc6169-c545-4cc9-a3ea-83c161f64108\") " pod="kube-system/konnectivity-agent-wx25k" Apr 23 17:55:00.823982 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.823957 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zks8\" (UniqueName: \"kubernetes.io/projected/70e049a1-02dc-4e15-94c0-0119bbff0af3-kube-api-access-7zks8\") pod \"tuned-fggsl\" (UID: \"70e049a1-02dc-4e15-94c0-0119bbff0af3\") " pod="openshift-cluster-node-tuning-operator/tuned-fggsl" Apr 23 17:55:00.825282 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:00.825265 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:55:00.825282 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:00.825283 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:55:00.825418 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:00.825292 2574 projected.go:194] Error preparing data for projected volume kube-api-access-k7gcb for pod openshift-network-diagnostics/network-check-target-88zs6: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:00.825418 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:00.825344 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb podName:35ee14f0-f248-4da4-a578-5901f2cd8f5f nodeName:}" failed. No retries permitted until 2026-04-23 17:55:01.325330126 +0000 UTC m=+139.222783875 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-k7gcb" (UniqueName: "kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb") pod "network-check-target-88zs6" (UID: "35ee14f0-f248-4da4-a578-5901f2cd8f5f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:00.827414 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.827376 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc2r5\" (UniqueName: \"kubernetes.io/projected/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-kube-api-access-hc2r5\") pod \"network-metrics-daemon-45ztw\" (UID: \"5af1b6bf-71a6-4257-9a8a-b48c1c14659c\") " pod="openshift-multus/network-metrics-daemon-45ztw"
Apr 23 17:55:00.831007 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.830986 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdfgg\" (UniqueName: \"kubernetes.io/projected/4fed6b9f-295e-4b13-8a53-cddd432bda46-kube-api-access-cdfgg\") pod \"node-resolver-cnp6f\" (UID: \"4fed6b9f-295e-4b13-8a53-cddd432bda46\") " pod="openshift-dns/node-resolver-cnp6f"
Apr 23 17:55:00.831862 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.831847 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-924d7\" (UniqueName: \"kubernetes.io/projected/e6422dbb-93b9-4284-b862-8a2613b43681-kube-api-access-924d7\") pod \"iptables-alerter-z254x\" (UID: \"e6422dbb-93b9-4284-b862-8a2613b43681\") " pod="openshift-network-operator/iptables-alerter-z254x"
Apr 23 17:55:00.834878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.834862 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-26rwf\" (UniqueName: \"kubernetes.io/projected/11c76c2e-7e8d-4076-bf3e-40c9a12aad39-kube-api-access-26rwf\") pod \"ovnkube-node-2kfrf\" (UID: \"11c76c2e-7e8d-4076-bf3e-40c9a12aad39\") " pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.847192 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.847173 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkd59\" (UniqueName: \"kubernetes.io/projected/9e31c73e-77bf-4968-b370-f732e248be97-kube-api-access-rkd59\") pod \"node-ca-xcs8j\" (UID: \"9e31c73e-77bf-4968-b370-f732e248be97\") " pod="openshift-image-registry/node-ca-xcs8j"
Apr 23 17:55:00.856164 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.856144 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwv4s\" (UniqueName: \"kubernetes.io/projected/84132246-7311-4103-a045-d865e6d62737-kube-api-access-jwv4s\") pod \"multus-additional-cni-plugins-4zn98\" (UID: \"84132246-7311-4103-a045-d865e6d62737\") " pod="openshift-multus/multus-additional-cni-plugins-4zn98"
Apr 23 17:55:00.866904 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.866883 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxrr4\" (UniqueName: \"kubernetes.io/projected/5645d713-95ce-41af-878d-48178971c03c-kube-api-access-xxrr4\") pod \"multus-5bjmz\" (UID: \"5645d713-95ce-41af-878d-48178971c03c\") " pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.876003 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.875983 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8vl6\" (UniqueName: \"kubernetes.io/projected/424e3ffc-c16a-4133-9d02-752d3ff52059-kube-api-access-f8vl6\") pod \"aws-ebs-csi-driver-node-pctgs\" (UID: \"424e3ffc-c16a-4133-9d02-752d3ff52059\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs"
Apr 23 17:55:00.885452 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.885434 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf"
Apr 23 17:55:00.889967 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.889952 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-z254x"
Apr 23 17:55:00.891963 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:00.891945 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11c76c2e_7e8d_4076_bf3e_40c9a12aad39.slice/crio-47b88e2f734311dcc6f20c35a66767a2f5858d9cff365bc95f941237742de89c WatchSource:0}: Error finding container 47b88e2f734311dcc6f20c35a66767a2f5858d9cff365bc95f941237742de89c: Status 404 returned error can't find the container with id 47b88e2f734311dcc6f20c35a66767a2f5858d9cff365bc95f941237742de89c
Apr 23 17:55:00.895058 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.895037 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs"
Apr 23 17:55:00.896086 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:00.896063 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6422dbb_93b9_4284_b862_8a2613b43681.slice/crio-081f299826b51cb16b81c6466890ff99450fa5488f9f6813c1b0daa041b9883c WatchSource:0}: Error finding container 081f299826b51cb16b81c6466890ff99450fa5488f9f6813c1b0daa041b9883c: Status 404 returned error can't find the container with id 081f299826b51cb16b81c6466890ff99450fa5488f9f6813c1b0daa041b9883c
Apr 23 17:55:00.900627 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.900610 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-wx25k"
Apr 23 17:55:00.900803 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:00.900784 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod424e3ffc_c16a_4133_9d02_752d3ff52059.slice/crio-888f7a0f3a06e157ebc25f52cd510df3838504e3f001bc2d9759d1e976e5a642 WatchSource:0}: Error finding container 888f7a0f3a06e157ebc25f52cd510df3838504e3f001bc2d9759d1e976e5a642: Status 404 returned error can't find the container with id 888f7a0f3a06e157ebc25f52cd510df3838504e3f001bc2d9759d1e976e5a642
Apr 23 17:55:00.905941 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.905925 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-fggsl"
Apr 23 17:55:00.907750 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:00.907361 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dbc6169_c545_4cc9_a3ea_83c161f64108.slice/crio-004897bf5128f43a9a0b614dc49732ed903177c17d5b32a40497e5fed69aada3 WatchSource:0}: Error finding container 004897bf5128f43a9a0b614dc49732ed903177c17d5b32a40497e5fed69aada3: Status 404 returned error can't find the container with id 004897bf5128f43a9a0b614dc49732ed903177c17d5b32a40497e5fed69aada3
Apr 23 17:55:00.910764 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.910633 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-cnp6f"
Apr 23 17:55:00.912972 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:00.912950 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70e049a1_02dc_4e15_94c0_0119bbff0af3.slice/crio-c92145e2b2d18b4921ae66cc98e193b674fdc57d9477bcc4cb6adeab8c4f9425 WatchSource:0}: Error finding container c92145e2b2d18b4921ae66cc98e193b674fdc57d9477bcc4cb6adeab8c4f9425: Status 404 returned error can't find the container with id c92145e2b2d18b4921ae66cc98e193b674fdc57d9477bcc4cb6adeab8c4f9425
Apr 23 17:55:00.915781 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.915763 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5bjmz"
Apr 23 17:55:00.916980 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:00.916962 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fed6b9f_295e_4b13_8a53_cddd432bda46.slice/crio-3bb269a3fcd4c5922348c6cce0c8cb6fe6e651d9126b24a135d35479d8509563 WatchSource:0}: Error finding container 3bb269a3fcd4c5922348c6cce0c8cb6fe6e651d9126b24a135d35479d8509563: Status 404 returned error can't find the container with id 3bb269a3fcd4c5922348c6cce0c8cb6fe6e651d9126b24a135d35479d8509563
Apr 23 17:55:00.920872 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.920853 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4zn98"
Apr 23 17:55:00.921125 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:00.921103 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5645d713_95ce_41af_878d_48178971c03c.slice/crio-95192f79b273979f6d62436a2e465184630fa692c5a290200ddc425d653e6cd4 WatchSource:0}: Error finding container 95192f79b273979f6d62436a2e465184630fa692c5a290200ddc425d653e6cd4: Status 404 returned error can't find the container with id 95192f79b273979f6d62436a2e465184630fa692c5a290200ddc425d653e6cd4
Apr 23 17:55:00.924823 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.924808 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xcs8j"
Apr 23 17:55:00.926463 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:00.926433 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84132246_7311_4103_a045_d865e6d62737.slice/crio-7f5541048fb7e0e3b818ad12bbf127a586bf851c07396c49497ed4c20ed04fc5 WatchSource:0}: Error finding container 7f5541048fb7e0e3b818ad12bbf127a586bf851c07396c49497ed4c20ed04fc5: Status 404 returned error can't find the container with id 7f5541048fb7e0e3b818ad12bbf127a586bf851c07396c49497ed4c20ed04fc5
Apr 23 17:55:00.932349 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:00.932332 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e31c73e_77bf_4968_b370_f732e248be97.slice/crio-c869d0046c3afb0bcedd54309e27672cc1fb39184a6533e483dda0e09af953f1 WatchSource:0}: Error finding container c869d0046c3afb0bcedd54309e27672cc1fb39184a6533e483dda0e09af953f1: Status 404 returned error can't find the container with id c869d0046c3afb0bcedd54309e27672cc1fb39184a6533e483dda0e09af953f1
Apr 23 17:55:00.994298 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:00.994252 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal" podStartSLOduration=0.994240322 podStartE2EDuration="994.240322ms" podCreationTimestamp="2026-04-23 17:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:00.955495695 +0000 UTC m=+138.852949465" watchObservedRunningTime="2026-04-23 17:55:00.994240322 +0000 UTC m=+138.891694094"
Apr 23 17:55:01.027086 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:01.027052 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-106.ec2.internal" podStartSLOduration=1.02704236 podStartE2EDuration="1.02704236s" podCreationTimestamp="2026-04-23 17:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:00.995263088 +0000 UTC m=+138.892716858" watchObservedRunningTime="2026-04-23 17:55:01.02704236 +0000 UTC m=+138.924496129"
Apr 23 17:55:01.314157 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:01.314124 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret\") pod \"global-pull-secret-syncer-p5ndb\" (UID: \"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb\") " pod="kube-system/global-pull-secret-syncer-p5ndb"
Apr 23 17:55:01.314922 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:01.314188 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs\") pod \"network-metrics-daemon-45ztw\" (UID: \"5af1b6bf-71a6-4257-9a8a-b48c1c14659c\") " pod="openshift-multus/network-metrics-daemon-45ztw"
Apr 23 17:55:01.314922 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:01.314363 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:01.314922 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:01.314421 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs podName:5af1b6bf-71a6-4257-9a8a-b48c1c14659c nodeName:}" failed. No retries permitted until 2026-04-23 17:55:02.314403855 +0000 UTC m=+140.211857614 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs") pod "network-metrics-daemon-45ztw" (UID: "5af1b6bf-71a6-4257-9a8a-b48c1c14659c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:01.314922 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:01.314803 2574 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:55:01.314922 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:01.314856 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret podName:f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb nodeName:}" failed. No retries permitted until 2026-04-23 17:55:02.314840197 +0000 UTC m=+140.212293948 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret") pod "global-pull-secret-syncer-p5ndb" (UID: "f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:55:01.414798 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:01.414762 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k7gcb\" (UniqueName: \"kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb\") pod \"network-check-target-88zs6\" (UID: \"35ee14f0-f248-4da4-a578-5901f2cd8f5f\") " pod="openshift-network-diagnostics/network-check-target-88zs6"
Apr 23 17:55:01.414952 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:01.414940 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:55:01.415007 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:01.414958 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:55:01.415007 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:01.414971 2574 projected.go:194] Error preparing data for projected volume kube-api-access-k7gcb for pod openshift-network-diagnostics/network-check-target-88zs6: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:01.415103 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:01.415032 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb podName:35ee14f0-f248-4da4-a578-5901f2cd8f5f nodeName:}" failed. No retries permitted until 2026-04-23 17:55:02.41501377 +0000 UTC m=+140.312467534 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-k7gcb" (UniqueName: "kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb") pod "network-check-target-88zs6" (UID: "35ee14f0-f248-4da4-a578-5901f2cd8f5f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:01.864045 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:01.864007 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5bjmz" event={"ID":"5645d713-95ce-41af-878d-48178971c03c","Type":"ContainerStarted","Data":"95192f79b273979f6d62436a2e465184630fa692c5a290200ddc425d653e6cd4"}
Apr 23 17:55:01.875770 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:01.875729 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-fggsl" event={"ID":"70e049a1-02dc-4e15-94c0-0119bbff0af3","Type":"ContainerStarted","Data":"c92145e2b2d18b4921ae66cc98e193b674fdc57d9477bcc4cb6adeab8c4f9425"}
Apr 23 17:55:01.877827 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:01.877771 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" event={"ID":"424e3ffc-c16a-4133-9d02-752d3ff52059","Type":"ContainerStarted","Data":"888f7a0f3a06e157ebc25f52cd510df3838504e3f001bc2d9759d1e976e5a642"}
Apr 23 17:55:01.879554 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:01.879500 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-z254x" event={"ID":"e6422dbb-93b9-4284-b862-8a2613b43681","Type":"ContainerStarted","Data":"081f299826b51cb16b81c6466890ff99450fa5488f9f6813c1b0daa041b9883c"}
Apr 23 17:55:01.884933 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:01.884910 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xcs8j" event={"ID":"9e31c73e-77bf-4968-b370-f732e248be97","Type":"ContainerStarted","Data":"c869d0046c3afb0bcedd54309e27672cc1fb39184a6533e483dda0e09af953f1"}
Apr 23 17:55:01.890066 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:01.890042 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4zn98" event={"ID":"84132246-7311-4103-a045-d865e6d62737","Type":"ContainerStarted","Data":"7f5541048fb7e0e3b818ad12bbf127a586bf851c07396c49497ed4c20ed04fc5"}
Apr 23 17:55:01.893743 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:01.893719 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cnp6f" event={"ID":"4fed6b9f-295e-4b13-8a53-cddd432bda46","Type":"ContainerStarted","Data":"3bb269a3fcd4c5922348c6cce0c8cb6fe6e651d9126b24a135d35479d8509563"}
Apr 23 17:55:01.903677 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:01.903653 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-wx25k" event={"ID":"0dbc6169-c545-4cc9-a3ea-83c161f64108","Type":"ContainerStarted","Data":"004897bf5128f43a9a0b614dc49732ed903177c17d5b32a40497e5fed69aada3"}
Apr 23 17:55:01.908634 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:01.908611 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" event={"ID":"11c76c2e-7e8d-4076-bf3e-40c9a12aad39","Type":"ContainerStarted","Data":"47b88e2f734311dcc6f20c35a66767a2f5858d9cff365bc95f941237742de89c"}
Apr 23 17:55:02.321661 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:02.321627 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret\") pod \"global-pull-secret-syncer-p5ndb\" (UID: \"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb\") " pod="kube-system/global-pull-secret-syncer-p5ndb"
Apr 23 17:55:02.322113 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:02.321684 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs\") pod \"network-metrics-daemon-45ztw\" (UID: \"5af1b6bf-71a6-4257-9a8a-b48c1c14659c\") " pod="openshift-multus/network-metrics-daemon-45ztw"
Apr 23 17:55:02.322113 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:02.321891 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:02.322113 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:02.321956 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs podName:5af1b6bf-71a6-4257-9a8a-b48c1c14659c nodeName:}" failed. No retries permitted until 2026-04-23 17:55:04.321937352 +0000 UTC m=+142.219391106 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs") pod "network-metrics-daemon-45ztw" (UID: "5af1b6bf-71a6-4257-9a8a-b48c1c14659c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:02.322387 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:02.322368 2574 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:55:02.322469 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:02.322434 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret podName:f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb nodeName:}" failed. No retries permitted until 2026-04-23 17:55:04.322409018 +0000 UTC m=+142.219862767 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret") pod "global-pull-secret-syncer-p5ndb" (UID: "f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:55:02.422238 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:02.422197 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k7gcb\" (UniqueName: \"kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb\") pod \"network-check-target-88zs6\" (UID: \"35ee14f0-f248-4da4-a578-5901f2cd8f5f\") " pod="openshift-network-diagnostics/network-check-target-88zs6"
Apr 23 17:55:02.422399 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:02.422383 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:55:02.422466 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:02.422406 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:55:02.422466 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:02.422419 2574 projected.go:194] Error preparing data for projected volume kube-api-access-k7gcb for pod openshift-network-diagnostics/network-check-target-88zs6: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:02.422564 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:02.422473 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb podName:35ee14f0-f248-4da4-a578-5901f2cd8f5f nodeName:}" failed. No retries permitted until 2026-04-23 17:55:04.42245543 +0000 UTC m=+142.319909183 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-k7gcb" (UniqueName: "kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb") pod "network-check-target-88zs6" (UID: "35ee14f0-f248-4da4-a578-5901f2cd8f5f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:02.605420 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:02.605317 2574 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:55:02.643772 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:02.643745 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw"
Apr 23 17:55:02.643921 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:02.643871 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c"
Apr 23 17:55:02.643921 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:02.643900 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6"
Apr 23 17:55:02.644038 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:02.644012 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f"
Apr 23 17:55:02.644099 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:02.644051 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb"
Apr 23 17:55:02.644154 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:02.644126 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb"
Apr 23 17:55:04.338474 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:04.338418 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret\") pod \"global-pull-secret-syncer-p5ndb\" (UID: \"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb\") " pod="kube-system/global-pull-secret-syncer-p5ndb"
Apr 23 17:55:04.338992 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:04.338484 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs\") pod \"network-metrics-daemon-45ztw\" (UID: \"5af1b6bf-71a6-4257-9a8a-b48c1c14659c\") " pod="openshift-multus/network-metrics-daemon-45ztw"
Apr 23 17:55:04.338992 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:04.338656 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:04.338992 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:04.338719 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs podName:5af1b6bf-71a6-4257-9a8a-b48c1c14659c nodeName:}" failed. No retries permitted until 2026-04-23 17:55:08.338701093 +0000 UTC m=+146.236154849 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs") pod "network-metrics-daemon-45ztw" (UID: "5af1b6bf-71a6-4257-9a8a-b48c1c14659c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:04.338992 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:04.338786 2574 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:55:04.338992 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:04.338855 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret podName:f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb nodeName:}" failed. No retries permitted until 2026-04-23 17:55:08.338837662 +0000 UTC m=+146.236291411 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret") pod "global-pull-secret-syncer-p5ndb" (UID: "f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:55:04.439096 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:04.439059 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k7gcb\" (UniqueName: \"kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb\") pod \"network-check-target-88zs6\" (UID: \"35ee14f0-f248-4da4-a578-5901f2cd8f5f\") " pod="openshift-network-diagnostics/network-check-target-88zs6"
Apr 23 17:55:04.439278 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:04.439256 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:55:04.439363 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:04.439287 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:55:04.439363 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:04.439300 2574 projected.go:194] Error preparing data for projected volume kube-api-access-k7gcb for pod openshift-network-diagnostics/network-check-target-88zs6: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:04.439363 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:04.439360 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb podName:35ee14f0-f248-4da4-a578-5901f2cd8f5f nodeName:}" failed. No retries permitted until 2026-04-23 17:55:08.439341934 +0000 UTC m=+146.336795690 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-k7gcb" (UniqueName: "kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb") pod "network-check-target-88zs6" (UID: "35ee14f0-f248-4da4-a578-5901f2cd8f5f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:04.641192 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:04.640686 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb"
Apr 23 17:55:04.641192 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:04.640824 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb"
Apr 23 17:55:04.641192 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:04.640686 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw"
Apr 23 17:55:04.641192 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:04.640956 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c"
Apr 23 17:55:04.641192 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:04.641004 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6"
Apr 23 17:55:04.641192 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:04.641079 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f"
Apr 23 17:55:06.641592 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:06.641550 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw"
Apr 23 17:55:06.642162 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:06.641709 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c"
Apr 23 17:55:06.642162 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:06.642135 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb"
Apr 23 17:55:06.642284 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:06.642233 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb"
Apr 23 17:55:06.642335 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:06.642303 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6"
Apr 23 17:55:06.642390 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:06.642373 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f"
Apr 23 17:55:07.606189 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:07.606146 2574 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:55:08.373168 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:08.373067 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret\") pod \"global-pull-secret-syncer-p5ndb\" (UID: \"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb\") " pod="kube-system/global-pull-secret-syncer-p5ndb"
Apr 23 17:55:08.373168 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:08.373130 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs\") pod \"network-metrics-daemon-45ztw\" (UID: \"5af1b6bf-71a6-4257-9a8a-b48c1c14659c\") " pod="openshift-multus/network-metrics-daemon-45ztw"
Apr 23 17:55:08.373945 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:08.373279 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:08.373945 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:08.373342 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs podName:5af1b6bf-71a6-4257-9a8a-b48c1c14659c nodeName:}" failed. No retries permitted until 2026-04-23 17:55:16.373324208 +0000 UTC m=+154.270777960 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs") pod "network-metrics-daemon-45ztw" (UID: "5af1b6bf-71a6-4257-9a8a-b48c1c14659c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:08.373945 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:08.373696 2574 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:55:08.373945 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:08.373762 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret podName:f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb nodeName:}" failed. No retries permitted until 2026-04-23 17:55:16.373744938 +0000 UTC m=+154.271198692 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret") pod "global-pull-secret-syncer-p5ndb" (UID: "f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:55:08.474280 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:08.474172 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k7gcb\" (UniqueName: \"kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb\") pod \"network-check-target-88zs6\" (UID: \"35ee14f0-f248-4da4-a578-5901f2cd8f5f\") " pod="openshift-network-diagnostics/network-check-target-88zs6"
Apr 23 17:55:08.474453 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:08.474353 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:55:08.474453 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:08.474380 2574
projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:55:08.474453 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:08.474393 2574 projected.go:194] Error preparing data for projected volume kube-api-access-k7gcb for pod openshift-network-diagnostics/network-check-target-88zs6: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:08.474453 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:08.474447 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb podName:35ee14f0-f248-4da4-a578-5901f2cd8f5f nodeName:}" failed. No retries permitted until 2026-04-23 17:55:16.474429948 +0000 UTC m=+154.371883710 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-k7gcb" (UniqueName: "kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb") pod "network-check-target-88zs6" (UID: "35ee14f0-f248-4da4-a578-5901f2cd8f5f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:08.640759 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:08.640613 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:08.640759 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:08.640659 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:08.640951 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:08.640752 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c" Apr 23 17:55:08.641175 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:08.640613 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:08.641175 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:08.641123 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb" Apr 23 17:55:08.641175 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:08.641164 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f" Apr 23 17:55:10.641068 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:10.641033 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:10.641515 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:10.641078 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:10.641515 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:10.641176 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb" Apr 23 17:55:10.641515 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:10.641239 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:10.641515 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:10.641233 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f" Apr 23 17:55:10.641515 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:10.641340 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c" Apr 23 17:55:12.606607 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:12.606538 2574 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:12.642005 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:12.641976 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:12.642158 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:12.642085 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c" Apr 23 17:55:12.642158 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:12.642149 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:12.642269 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:12.642232 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb" Apr 23 17:55:12.642269 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:12.642246 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:12.642379 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:12.642354 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f" Apr 23 17:55:14.641129 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:14.641089 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:14.641521 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:14.641089 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:14.641521 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:14.641233 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f" Apr 23 17:55:14.641521 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:14.641092 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:14.641521 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:14.641322 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb" Apr 23 17:55:14.641521 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:14.641431 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c" Apr 23 17:55:16.438046 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:16.437995 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret\") pod \"global-pull-secret-syncer-p5ndb\" (UID: \"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb\") " pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:16.438493 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:16.438064 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs\") pod \"network-metrics-daemon-45ztw\" (UID: \"5af1b6bf-71a6-4257-9a8a-b48c1c14659c\") " pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:16.438493 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:16.438132 2574 secret.go:189] Couldn't get secret 
kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:55:16.438493 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:16.438171 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:16.438493 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:16.438212 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret podName:f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb nodeName:}" failed. No retries permitted until 2026-04-23 17:55:32.438193818 +0000 UTC m=+170.335647573 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret") pod "global-pull-secret-syncer-p5ndb" (UID: "f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:55:16.438493 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:16.438231 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs podName:5af1b6bf-71a6-4257-9a8a-b48c1c14659c nodeName:}" failed. No retries permitted until 2026-04-23 17:55:32.438222244 +0000 UTC m=+170.335675991 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs") pod "network-metrics-daemon-45ztw" (UID: "5af1b6bf-71a6-4257-9a8a-b48c1c14659c") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:16.538796 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:16.538764 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k7gcb\" (UniqueName: \"kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb\") pod \"network-check-target-88zs6\" (UID: \"35ee14f0-f248-4da4-a578-5901f2cd8f5f\") " pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:16.538964 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:16.538928 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:55:16.538964 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:16.538948 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:55:16.538964 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:16.538958 2574 projected.go:194] Error preparing data for projected volume kube-api-access-k7gcb for pod openshift-network-diagnostics/network-check-target-88zs6: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:16.539121 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:16.539023 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb podName:35ee14f0-f248-4da4-a578-5901f2cd8f5f nodeName:}" failed. 
No retries permitted until 2026-04-23 17:55:32.53899564 +0000 UTC m=+170.436449403 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-k7gcb" (UniqueName: "kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb") pod "network-check-target-88zs6" (UID: "35ee14f0-f248-4da4-a578-5901f2cd8f5f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:16.641588 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:16.641542 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:16.641738 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:16.641682 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:16.641738 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:16.641706 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:16.641850 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:16.641794 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c" Apr 23 17:55:16.641850 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:16.641684 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb" Apr 23 17:55:16.641942 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:16.641860 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f" Apr 23 17:55:17.607507 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:17.607466 2574 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:17.953365 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:17.951858 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" event={"ID":"11c76c2e-7e8d-4076-bf3e-40c9a12aad39","Type":"ContainerStarted","Data":"5afb7563f79bf00b49ca0d73ddbe752f9c461e7783806cf6af26ac2ab7efdc00"} Apr 23 17:55:17.955642 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:17.955327 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5bjmz" event={"ID":"5645d713-95ce-41af-878d-48178971c03c","Type":"ContainerStarted","Data":"e2569d709a30896bafcade9591ade3cd7f6f80c1afbe1af6548db0ce9bfb1216"} Apr 23 17:55:17.957429 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:17.957405 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-fggsl" event={"ID":"70e049a1-02dc-4e15-94c0-0119bbff0af3","Type":"ContainerStarted","Data":"5b99a15e787ab64580ae1fb3a62e02498757976947818596a7591024196fad05"} Apr 23 17:55:17.985199 
ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:17.985153 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-5bjmz" podStartSLOduration=55.180668184 podStartE2EDuration="1m11.985139659s" podCreationTimestamp="2026-04-23 17:54:06 +0000 UTC" firstStartedPulling="2026-04-23 17:55:00.923526471 +0000 UTC m=+138.820980229" lastFinishedPulling="2026-04-23 17:55:17.727997945 +0000 UTC m=+155.625451704" observedRunningTime="2026-04-23 17:55:17.984864506 +0000 UTC m=+155.882318276" watchObservedRunningTime="2026-04-23 17:55:17.985139659 +0000 UTC m=+155.882593428" Apr 23 17:55:18.006342 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.006307 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-fggsl" podStartSLOduration=55.207292638 podStartE2EDuration="1m12.006294571s" podCreationTimestamp="2026-04-23 17:54:06 +0000 UTC" firstStartedPulling="2026-04-23 17:55:00.914424516 +0000 UTC m=+138.811878264" lastFinishedPulling="2026-04-23 17:55:17.713426445 +0000 UTC m=+155.610880197" observedRunningTime="2026-04-23 17:55:18.006120374 +0000 UTC m=+155.903574155" watchObservedRunningTime="2026-04-23 17:55:18.006294571 +0000 UTC m=+155.903748344" Apr 23 17:55:18.641368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.641286 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:18.641957 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:18.641430 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb" Apr 23 17:55:18.641957 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.641436 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:18.641957 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.641459 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:18.641957 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:18.641556 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c" Apr 23 17:55:18.641957 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:18.641672 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f" Apr 23 17:55:18.877501 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.877478 2574 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Apr 23 17:55:18.962025 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.962000 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" event={"ID":"424e3ffc-c16a-4133-9d02-752d3ff52059","Type":"ContainerStarted","Data":"9e6f78421912960826a047155d9a8bc51ae25cd85c406bf0a542a6c705562796"} Apr 23 17:55:18.962123 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.962034 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" event={"ID":"424e3ffc-c16a-4133-9d02-752d3ff52059","Type":"ContainerStarted","Data":"6dee5a2a9ec7885321ccde073b33fe6f94186d353c18606f586cb5abdd2b6229"} Apr 23 17:55:18.963109 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.963087 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-z254x" event={"ID":"e6422dbb-93b9-4284-b862-8a2613b43681","Type":"ContainerStarted","Data":"186e6eea99803074caf395a209d7a6ac68645482dcddb1c91ae54b83bea3c558"} Apr 23 17:55:18.964248 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.964223 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xcs8j" event={"ID":"9e31c73e-77bf-4968-b370-f732e248be97","Type":"ContainerStarted","Data":"910649e20f2fe928553ab8cf4bba9fd9e2494efe0dd519cd80ec095930e4fe8f"} Apr 23 17:55:18.965442 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.965418 2574 generic.go:358] "Generic (PLEG): container finished" podID="84132246-7311-4103-a045-d865e6d62737" 
containerID="07d1b778856e892ebcaf4f9d2b499c77e997219bb5a3e53f6a5e4ac9bd7681f4" exitCode=0 Apr 23 17:55:18.965530 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.965490 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4zn98" event={"ID":"84132246-7311-4103-a045-d865e6d62737","Type":"ContainerDied","Data":"07d1b778856e892ebcaf4f9d2b499c77e997219bb5a3e53f6a5e4ac9bd7681f4"} Apr 23 17:55:18.966686 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.966663 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cnp6f" event={"ID":"4fed6b9f-295e-4b13-8a53-cddd432bda46","Type":"ContainerStarted","Data":"ec469439eb7b44b3f73d9cd334d460fd15aa5c83083f4be5122a80c46e2f1fad"} Apr 23 17:55:18.967961 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.967935 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-wx25k" event={"ID":"0dbc6169-c545-4cc9-a3ea-83c161f64108","Type":"ContainerStarted","Data":"30195e93156c894c821408bbb30f5905f7a84315ef4747f85aa4acf5c9bd3fc4"} Apr 23 17:55:18.970205 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.970189 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 17:55:18.970462 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.970447 2574 generic.go:358] "Generic (PLEG): container finished" podID="11c76c2e-7e8d-4076-bf3e-40c9a12aad39" containerID="b4783e7c841d517a6322db4ee850f2dfbff9c10851161c05ea8b7155d68d73a1" exitCode=1 Apr 23 17:55:18.970529 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.970510 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" event={"ID":"11c76c2e-7e8d-4076-bf3e-40c9a12aad39","Type":"ContainerStarted","Data":"07c53e9b2cde5697ecdb928da682a87ac0d702c4f5353e9a33d11596999f0dc8"} Apr 23 17:55:18.970569 ip-10-0-142-106 
kubenswrapper[2574]: I0423 17:55:18.970536 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" event={"ID":"11c76c2e-7e8d-4076-bf3e-40c9a12aad39","Type":"ContainerStarted","Data":"5a400281bd20a2be136bc483b18902d8d8d4fff911f6b7cd27bb5ff42c18a61c"} Apr 23 17:55:18.970569 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.970550 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" event={"ID":"11c76c2e-7e8d-4076-bf3e-40c9a12aad39","Type":"ContainerStarted","Data":"58c2a0938cf0e6b24e751b642cb494a17a5b2c3497768132d3e48f7fe60f6d25"} Apr 23 17:55:18.970569 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.970562 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" event={"ID":"11c76c2e-7e8d-4076-bf3e-40c9a12aad39","Type":"ContainerStarted","Data":"811b69dc484f6cacf7ae03576509d043ca8ae51b6f5575564564a715d56393ca"} Apr 23 17:55:18.970685 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.970592 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" event={"ID":"11c76c2e-7e8d-4076-bf3e-40c9a12aad39","Type":"ContainerDied","Data":"b4783e7c841d517a6322db4ee850f2dfbff9c10851161c05ea8b7155d68d73a1"} Apr 23 17:55:18.986822 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:18.986782 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-z254x" podStartSLOduration=57.194815255 podStartE2EDuration="1m13.9867723s" podCreationTimestamp="2026-04-23 17:54:05 +0000 UTC" firstStartedPulling="2026-04-23 17:55:00.898037109 +0000 UTC m=+138.795490861" lastFinishedPulling="2026-04-23 17:55:17.689994145 +0000 UTC m=+155.587447906" observedRunningTime="2026-04-23 17:55:18.986513698 +0000 UTC m=+156.883967480" watchObservedRunningTime="2026-04-23 17:55:18.9867723 +0000 UTC m=+156.884226070" Apr 23 17:55:19.068602 
ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:19.068551 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-cnp6f" podStartSLOduration=56.272594345 podStartE2EDuration="1m13.068539662s" podCreationTimestamp="2026-04-23 17:54:06 +0000 UTC" firstStartedPulling="2026-04-23 17:55:00.918440178 +0000 UTC m=+138.815893927" lastFinishedPulling="2026-04-23 17:55:17.714385496 +0000 UTC m=+155.611839244" observedRunningTime="2026-04-23 17:55:19.047527728 +0000 UTC m=+156.944981498" watchObservedRunningTime="2026-04-23 17:55:19.068539662 +0000 UTC m=+156.965993431" Apr 23 17:55:19.068952 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:19.068927 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xcs8j" podStartSLOduration=60.969060124 podStartE2EDuration="1m13.068919679s" podCreationTimestamp="2026-04-23 17:54:06 +0000 UTC" firstStartedPulling="2026-04-23 17:55:00.934093654 +0000 UTC m=+138.831547402" lastFinishedPulling="2026-04-23 17:55:13.033953199 +0000 UTC m=+150.931406957" observedRunningTime="2026-04-23 17:55:19.068673283 +0000 UTC m=+156.966127040" watchObservedRunningTime="2026-04-23 17:55:19.068919679 +0000 UTC m=+156.966373448" Apr 23 17:55:19.092510 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:19.092468 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-wx25k" podStartSLOduration=56.290060092 podStartE2EDuration="1m13.092459123s" podCreationTimestamp="2026-04-23 17:54:06 +0000 UTC" firstStartedPulling="2026-04-23 17:55:00.909354923 +0000 UTC m=+138.806808671" lastFinishedPulling="2026-04-23 17:55:17.711753954 +0000 UTC m=+155.609207702" observedRunningTime="2026-04-23 17:55:19.092183961 +0000 UTC m=+156.989637731" watchObservedRunningTime="2026-04-23 17:55:19.092459123 +0000 UTC m=+156.989912908" Apr 23 17:55:19.665758 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:19.665645 2574 
reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-23T17:55:18.877497119Z","UUID":"cf598c15-2464-4570-91a8-905ebfdd2427","Handler":null,"Name":"","Endpoint":""} Apr 23 17:55:19.669049 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:19.669014 2574 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 23 17:55:19.669165 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:19.669058 2574 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 23 17:55:19.975165 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:19.975110 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" event={"ID":"424e3ffc-c16a-4133-9d02-752d3ff52059","Type":"ContainerStarted","Data":"96727e391bf762e8554c2e693eeb3ff84f995f1acddab273b181929900d7c27e"} Apr 23 17:55:20.000014 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:19.999968 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-pctgs" podStartSLOduration=56.181658845 podStartE2EDuration="1m14.99995421s" podCreationTimestamp="2026-04-23 17:54:05 +0000 UTC" firstStartedPulling="2026-04-23 17:55:00.903632943 +0000 UTC m=+138.801086691" lastFinishedPulling="2026-04-23 17:55:19.721928298 +0000 UTC m=+157.619382056" observedRunningTime="2026-04-23 17:55:19.998823811 +0000 UTC m=+157.896277592" watchObservedRunningTime="2026-04-23 17:55:19.99995421 +0000 UTC m=+157.897407979" Apr 23 17:55:20.641506 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:20.641272 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:20.641699 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:20.641271 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:20.641699 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:20.641632 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c" Apr 23 17:55:20.641812 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:20.641703 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f" Apr 23 17:55:20.641812 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:20.641294 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:20.641812 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:20.641803 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb" Apr 23 17:55:20.900877 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:20.900846 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-wx25k" Apr 23 17:55:20.901415 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:20.901400 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-wx25k" Apr 23 17:55:20.979858 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:20.979832 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 17:55:20.980198 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:20.980163 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" event={"ID":"11c76c2e-7e8d-4076-bf3e-40c9a12aad39","Type":"ContainerStarted","Data":"e3b334ebac04f917198fc68a1ddd3bf544e5a26bc18173dd726f7e334f72341a"} Apr 23 17:55:20.980699 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:20.980674 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-wx25k" Apr 23 17:55:20.980975 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:20.980959 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-wx25k" Apr 23 17:55:22.608068 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:22.608037 2574 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:22.642559 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:22.642534 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:22.642710 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:22.642650 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:22.642710 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:22.642680 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f" Apr 23 17:55:22.642811 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:22.642736 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c" Apr 23 17:55:22.642811 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:22.642766 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:22.642879 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:22.642835 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb" Apr 23 17:55:22.987303 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:22.987281 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 17:55:22.987569 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:22.987547 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" event={"ID":"11c76c2e-7e8d-4076-bf3e-40c9a12aad39","Type":"ContainerStarted","Data":"8682cc56a783aa5e00d3ebc237abfb24affc8d63c0f4ee10a8b19f078ab48746"} Apr 23 17:55:22.987860 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:22.987842 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:22.988004 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:22.987986 2574 scope.go:117] "RemoveContainer" containerID="b4783e7c841d517a6322db4ee850f2dfbff9c10851161c05ea8b7155d68d73a1" Apr 23 17:55:23.002100 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:23.002082 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:23.990959 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:23.990785 2574 generic.go:358] "Generic (PLEG): container finished" podID="84132246-7311-4103-a045-d865e6d62737" containerID="48d64a1ba0d456ceb9f883a8b1ce6077c4490437360a14940cdadb818bd9d3db" exitCode=0 Apr 23 17:55:23.991561 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:23.990866 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4zn98" event={"ID":"84132246-7311-4103-a045-d865e6d62737","Type":"ContainerDied","Data":"48d64a1ba0d456ceb9f883a8b1ce6077c4490437360a14940cdadb818bd9d3db"} Apr 23 17:55:23.994671 ip-10-0-142-106 kubenswrapper[2574]: I0423 
17:55:23.994654 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 17:55:23.995039 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:23.994997 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" event={"ID":"11c76c2e-7e8d-4076-bf3e-40c9a12aad39","Type":"ContainerStarted","Data":"6fef1e4ba7169e69575279480e9ca80bc7847b2f818c8389d60b38412cc13909"} Apr 23 17:55:23.995304 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:23.995288 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:23.995359 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:23.995315 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:24.008828 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:24.008808 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:24.066786 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:24.066683 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" podStartSLOduration=62.176569121 podStartE2EDuration="1m19.066667266s" podCreationTimestamp="2026-04-23 17:54:05 +0000 UTC" firstStartedPulling="2026-04-23 17:55:00.893719708 +0000 UTC m=+138.791173456" lastFinishedPulling="2026-04-23 17:55:17.783817841 +0000 UTC m=+155.681271601" observedRunningTime="2026-04-23 17:55:24.065121935 +0000 UTC m=+161.962575733" watchObservedRunningTime="2026-04-23 17:55:24.066667266 +0000 UTC m=+161.964121036" Apr 23 17:55:24.641132 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:24.641101 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:24.641368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:24.641106 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:24.641368 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:24.641216 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c" Apr 23 17:55:24.641368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:24.641224 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:24.641368 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:24.641322 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f" Apr 23 17:55:24.641607 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:24.641421 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb" Apr 23 17:55:25.049937 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:25.049912 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-p5ndb"] Apr 23 17:55:25.050421 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:25.050020 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:25.050421 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:25.050133 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb" Apr 23 17:55:25.053380 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:25.053358 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-45ztw"] Apr 23 17:55:25.053479 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:25.053469 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:25.053633 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:25.053609 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c" Apr 23 17:55:25.054036 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:25.054006 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-88zs6"] Apr 23 17:55:25.054130 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:25.054084 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:25.054192 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:25.054171 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f" Apr 23 17:55:26.001066 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:26.001029 2574 generic.go:358] "Generic (PLEG): container finished" podID="84132246-7311-4103-a045-d865e6d62737" containerID="ea7aad5ee06ccdc0116b3b3f389d58f474bb31468468de3d530c4100b36c5e81" exitCode=0 Apr 23 17:55:26.001266 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:26.001110 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4zn98" event={"ID":"84132246-7311-4103-a045-d865e6d62737","Type":"ContainerDied","Data":"ea7aad5ee06ccdc0116b3b3f389d58f474bb31468468de3d530c4100b36c5e81"} Apr 23 17:55:26.641498 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:26.641331 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:26.641819 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:26.641352 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:26.641819 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:26.641603 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f" Apr 23 17:55:26.641819 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:26.641418 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:26.641819 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:26.641703 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c" Apr 23 17:55:26.641819 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:26.641757 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb" Apr 23 17:55:27.609260 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:27.609223 2574 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:28.007089 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:28.007058 2574 generic.go:358] "Generic (PLEG): container finished" podID="84132246-7311-4103-a045-d865e6d62737" containerID="50783aa39d8b813e359bff9878967abd393e8623dad4f6b8c23aca4f47255a71" exitCode=0 Apr 23 17:55:28.007436 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:28.007097 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4zn98" event={"ID":"84132246-7311-4103-a045-d865e6d62737","Type":"ContainerDied","Data":"50783aa39d8b813e359bff9878967abd393e8623dad4f6b8c23aca4f47255a71"} Apr 23 17:55:28.641320 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:28.641279 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:28.641320 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:28.641308 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:28.641567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:28.641279 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:28.641567 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:28.641408 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f" Apr 23 17:55:28.641567 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:28.641470 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb" Apr 23 17:55:28.641567 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:28.641545 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c" Apr 23 17:55:30.640679 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:30.640646 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:30.640679 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:30.640669 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:30.641249 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:30.640674 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:30.641249 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:30.640782 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45ztw" podUID="5af1b6bf-71a6-4257-9a8a-b48c1c14659c" Apr 23 17:55:30.641249 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:30.640903 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-88zs6" podUID="35ee14f0-f248-4da4-a578-5901f2cd8f5f" Apr 23 17:55:30.641249 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:30.641027 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-p5ndb" podUID="f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb" Apr 23 17:55:32.456073 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:32.456038 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret\") pod \"global-pull-secret-syncer-p5ndb\" (UID: \"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb\") " pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:32.456073 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:32.456084 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs\") pod \"network-metrics-daemon-45ztw\" (UID: \"5af1b6bf-71a6-4257-9a8a-b48c1c14659c\") " pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:32.456621 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:32.456193 2574 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:55:32.456621 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:32.456250 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret podName:f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb nodeName:}" failed. No retries permitted until 2026-04-23 17:56:04.456237491 +0000 UTC m=+202.353691238 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret") pod "global-pull-secret-syncer-p5ndb" (UID: "f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:55:32.456621 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:32.456193 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:32.456621 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:32.456319 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs podName:5af1b6bf-71a6-4257-9a8a-b48c1c14659c nodeName:}" failed. No retries permitted until 2026-04-23 17:56:04.456307275 +0000 UTC m=+202.353761032 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs") pod "network-metrics-daemon-45ztw" (UID: "5af1b6bf-71a6-4257-9a8a-b48c1c14659c") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:32.556748 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:32.556715 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k7gcb\" (UniqueName: \"kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb\") pod \"network-check-target-88zs6\" (UID: \"35ee14f0-f248-4da4-a578-5901f2cd8f5f\") " pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:55:32.556936 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:32.556904 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:55:32.556936 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:32.556925 2574 
projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:55:32.556936 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:32.556938 2574 projected.go:194] Error preparing data for projected volume kube-api-access-k7gcb for pod openshift-network-diagnostics/network-check-target-88zs6: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:32.557129 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:32.557003 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb podName:35ee14f0-f248-4da4-a578-5901f2cd8f5f nodeName:}" failed. No retries permitted until 2026-04-23 17:56:04.556982352 +0000 UTC m=+202.454436104 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-k7gcb" (UniqueName: "kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb") pod "network-check-target-88zs6" (UID: "35ee14f0-f248-4da4-a578-5901f2cd8f5f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:32.642289 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:32.642253 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:55:32.642458 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:32.642371 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:55:32.642773 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:32.642748 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6"
Apr 23 17:55:32.646436 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:32.646416 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Apr 23 17:55:32.646560 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:32.646458 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-5487j\""
Apr 23 17:55:32.646657 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:32.646625 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-gszvz\""
Apr 23 17:55:32.646657 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:32.646661 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Apr 23 17:55:32.646832 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:32.646721 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\""
Apr 23 17:55:32.646832 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:32.646774 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Apr 23 17:55:33.469957 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.469925 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-9s9cp"]
Apr 23 17:55:33.488104 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.488081 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.491549 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.491525 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\""
Apr 23 17:55:33.491679 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.491528 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\""
Apr 23 17:55:33.491872 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.491852 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\""
Apr 23 17:55:33.492776 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.492725 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\""
Apr 23 17:55:33.492776 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.492737 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\""
Apr 23 17:55:33.492776 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.492740 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\""
Apr 23 17:55:33.493106 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.493076 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-rzdrn\""
Apr 23 17:55:33.564180 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.564141 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.564180 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.564183 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-wtmp\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.564358 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.564290 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c83e50d5-4354-484c-97ef-786bd15344a0-sys\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.564424 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.564366 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c83e50d5-4354-484c-97ef-786bd15344a0-metrics-client-ca\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.564424 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.564408 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wg55\" (UniqueName: \"kubernetes.io/projected/c83e50d5-4354-484c-97ef-786bd15344a0-kube-api-access-6wg55\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.564525 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.564463 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c83e50d5-4354-484c-97ef-786bd15344a0-root\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.564525 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.564490 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-accelerators-collector-config\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.564648 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.564525 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-textfile\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.564648 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.564552 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-tls\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.665230 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.665202 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c83e50d5-4354-484c-97ef-786bd15344a0-sys\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.665348 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.665246 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c83e50d5-4354-484c-97ef-786bd15344a0-metrics-client-ca\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.665348 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.665301 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c83e50d5-4354-484c-97ef-786bd15344a0-sys\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.665417 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.665345 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6wg55\" (UniqueName: \"kubernetes.io/projected/c83e50d5-4354-484c-97ef-786bd15344a0-kube-api-access-6wg55\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.665417 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.665389 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c83e50d5-4354-484c-97ef-786bd15344a0-root\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.665517 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.665414 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-accelerators-collector-config\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.665517 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.665451 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-textfile\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.665517 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.665480 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-tls\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.665517 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.665508 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.665785 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.665535 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-wtmp\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.665785 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:33.665584 2574 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found
Apr 23 17:55:33.665785 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:33.665652 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-tls podName:c83e50d5-4354-484c-97ef-786bd15344a0 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:34.165629907 +0000 UTC m=+172.063083660 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-tls") pod "node-exporter-9s9cp" (UID: "c83e50d5-4354-484c-97ef-786bd15344a0") : secret "node-exporter-tls" not found
Apr 23 17:55:33.665785 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.665753 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-wtmp\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.665785 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.665476 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c83e50d5-4354-484c-97ef-786bd15344a0-root\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.666053 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.665894 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-textfile\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.666053 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.666030 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c83e50d5-4354-484c-97ef-786bd15344a0-metrics-client-ca\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.666053 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.666025 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-accelerators-collector-config\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.669794 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.669765 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:33.675755 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:33.675738 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wg55\" (UniqueName: \"kubernetes.io/projected/c83e50d5-4354-484c-97ef-786bd15344a0-kube-api-access-6wg55\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:34.020043 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:34.020009 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4zn98" event={"ID":"84132246-7311-4103-a045-d865e6d62737","Type":"ContainerStarted","Data":"ff257f749f85b0e12f2dac0b0c46a96bc3b3976b6c065dbb1e01d6c2f958808b"}
Apr 23 17:55:34.169702 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:34.169674 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-tls\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:34.171978 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:34.171950 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c83e50d5-4354-484c-97ef-786bd15344a0-node-exporter-tls\") pod \"node-exporter-9s9cp\" (UID: \"c83e50d5-4354-484c-97ef-786bd15344a0\") " pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:34.398393 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:34.398370 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-9s9cp"
Apr 23 17:55:34.407106 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:34.407083 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc83e50d5_4354_484c_97ef_786bd15344a0.slice/crio-9178f89019c7317779ca8a3a91c61335d7addc2c4e6024be473cfd8c6d2bfe45 WatchSource:0}: Error finding container 9178f89019c7317779ca8a3a91c61335d7addc2c4e6024be473cfd8c6d2bfe45: Status 404 returned error can't find the container with id 9178f89019c7317779ca8a3a91c61335d7addc2c4e6024be473cfd8c6d2bfe45
Apr 23 17:55:35.023161 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:35.023124 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9s9cp" event={"ID":"c83e50d5-4354-484c-97ef-786bd15344a0","Type":"ContainerStarted","Data":"9178f89019c7317779ca8a3a91c61335d7addc2c4e6024be473cfd8c6d2bfe45"}
Apr 23 17:55:35.025675 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:35.025644 2574 generic.go:358] "Generic (PLEG): container finished" podID="84132246-7311-4103-a045-d865e6d62737" containerID="ff257f749f85b0e12f2dac0b0c46a96bc3b3976b6c065dbb1e01d6c2f958808b" exitCode=0
Apr 23 17:55:35.025815 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:35.025689 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4zn98" event={"ID":"84132246-7311-4103-a045-d865e6d62737","Type":"ContainerDied","Data":"ff257f749f85b0e12f2dac0b0c46a96bc3b3976b6c065dbb1e01d6c2f958808b"}
Apr 23 17:55:36.030767 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:36.030732 2574 generic.go:358] "Generic (PLEG): container finished" podID="84132246-7311-4103-a045-d865e6d62737" containerID="67f84372e2f4ac717743f91e056826426cb965ea0ecc503626f69ede3698058b" exitCode=0
Apr 23 17:55:36.031410 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:36.030781 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4zn98" event={"ID":"84132246-7311-4103-a045-d865e6d62737","Type":"ContainerDied","Data":"67f84372e2f4ac717743f91e056826426cb965ea0ecc503626f69ede3698058b"}
Apr 23 17:55:36.032355 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:36.032329 2574 generic.go:358] "Generic (PLEG): container finished" podID="c83e50d5-4354-484c-97ef-786bd15344a0" containerID="65a28d4a521f094873f90e17eb3868db7b4cd2b0669d2e48bbb79b83ca60c2f0" exitCode=0
Apr 23 17:55:36.032466 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:36.032370 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9s9cp" event={"ID":"c83e50d5-4354-484c-97ef-786bd15344a0","Type":"ContainerDied","Data":"65a28d4a521f094873f90e17eb3868db7b4cd2b0669d2e48bbb79b83ca60c2f0"}
Apr 23 17:55:37.036956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.036903 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4zn98" event={"ID":"84132246-7311-4103-a045-d865e6d62737","Type":"ContainerStarted","Data":"bff62fa3588003894cb9a9cf4d760a4f7d0734b6e9cb17ebeb0875be157048dc"}
Apr 23 17:55:37.038625 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.038601 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9s9cp" event={"ID":"c83e50d5-4354-484c-97ef-786bd15344a0","Type":"ContainerStarted","Data":"84422198830bb0a85d0864c98fb053f5fa0f7619e1ee1826d28cf32de5dac6ff"}
Apr 23 17:55:37.038736 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.038629 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9s9cp" event={"ID":"c83e50d5-4354-484c-97ef-786bd15344a0","Type":"ContainerStarted","Data":"e17d7c335bdf3d306669c26f03694ceb25cee37fadd9abfc3b80f52fc1fd18af"}
Apr 23 17:55:37.069552 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.069514 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-4zn98" podStartSLOduration=58.16842634 podStartE2EDuration="1m31.069501771s" podCreationTimestamp="2026-04-23 17:54:06 +0000 UTC" firstStartedPulling="2026-04-23 17:55:00.928886619 +0000 UTC m=+138.826340372" lastFinishedPulling="2026-04-23 17:55:33.829962056 +0000 UTC m=+171.727415803" observedRunningTime="2026-04-23 17:55:37.068335799 +0000 UTC m=+174.965789584" watchObservedRunningTime="2026-04-23 17:55:37.069501771 +0000 UTC m=+174.966955538"
Apr 23 17:55:37.094046 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.094010 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-9s9cp" podStartSLOduration=3.153691391 podStartE2EDuration="4.093999045s" podCreationTimestamp="2026-04-23 17:55:33 +0000 UTC" firstStartedPulling="2026-04-23 17:55:34.408539462 +0000 UTC m=+172.305993210" lastFinishedPulling="2026-04-23 17:55:35.348847111 +0000 UTC m=+173.246300864" observedRunningTime="2026-04-23 17:55:37.092655436 +0000 UTC m=+174.990109207" watchObservedRunningTime="2026-04-23 17:55:37.093999045 +0000 UTC m=+174.991452814"
Apr 23 17:55:37.521649 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.521622 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-106.ec2.internal" event="NodeReady"
Apr 23 17:55:37.572535 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.572513 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"]
Apr 23 17:55:37.596559 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.596537 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.597028 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.597011 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"]
Apr 23 17:55:37.605109 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.605087 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\""
Apr 23 17:55:37.605214 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.605119 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Apr 23 17:55:37.605723 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.605710 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-psc88\""
Apr 23 17:55:37.605967 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.605954 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Apr 23 17:55:37.608354 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.608337 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-kms9t"]
Apr 23 17:55:37.620336 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.620309 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Apr 23 17:55:37.631046 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.631023 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-n9t2k"]
Apr 23 17:55:37.631178 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.631161 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kms9t"
Apr 23 17:55:37.634760 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.634729 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-f4mbl\""
Apr 23 17:55:37.637903 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.637885 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Apr 23 17:55:37.639374 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.639355 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Apr 23 17:55:37.639987 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.639967 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Apr 23 17:55:37.652886 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.652799 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kms9t"]
Apr 23 17:55:37.652965 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.652893 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-n9t2k"]
Apr 23 17:55:37.652965 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.652917 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-n9t2k"
Apr 23 17:55:37.657079 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.657061 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-54wff\""
Apr 23 17:55:37.657179 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.657162 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-th8sw"]
Apr 23 17:55:37.657314 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.657282 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Apr 23 17:55:37.657403 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.657353 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Apr 23 17:55:37.676605 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.676568 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-th8sw"
Apr 23 17:55:37.679550 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.679531 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\""
Apr 23 17:55:37.679677 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.679622 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\""
Apr 23 17:55:37.679813 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.679798 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-b7fp4\""
Apr 23 17:55:37.679872 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.679819 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\""
Apr 23 17:55:37.679872 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.679838 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\""
Apr 23 17:55:37.686474 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.686456 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-th8sw"]
Apr 23 17:55:37.698197 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.698177 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-image-registry-private-configuration\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.698292 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.698210 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs4c8\" (UniqueName: \"kubernetes.io/projected/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-kube-api-access-zs4c8\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.698292 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.698259 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-ca-trust-extracted\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.698377 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.698324 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-registry-certificates\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.698377 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.698355 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-trusted-ca\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.698469 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.698374 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-registry-tls\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.698469 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.698391 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-installation-pull-secrets\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.698469 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.698438 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-bound-sa-token\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.787625 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.787556 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-574f7989c4-mftsr"]
Apr 23 17:55:37.799077 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.799056 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63127b59-6d72-4d18-85c3-8766abc25908-cert\") pod \"ingress-canary-kms9t\" (UID: \"63127b59-6d72-4d18-85c3-8766abc25908\") " pod="openshift-ingress-canary/ingress-canary-kms9t"
Apr 23 17:55:37.799170 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.799095 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/d7a76e75-dee9-437f-afaf-611235bcda31-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-th8sw\" (UID: \"d7a76e75-dee9-437f-afaf-611235bcda31\") " pod="openshift-insights/insights-runtime-extractor-th8sw"
Apr 23 17:55:37.799170 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.799121 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zs4c8\" (UniqueName: \"kubernetes.io/projected/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-kube-api-access-zs4c8\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.799170 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.799144 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c0717f3c-f89c-4cad-a2c7-5e017bcc9292-metrics-tls\") pod \"dns-default-n9t2k\" (UID: \"c0717f3c-f89c-4cad-a2c7-5e017bcc9292\") " pod="openshift-dns/dns-default-n9t2k"
Apr 23 17:55:37.799353 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.799210 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/d7a76e75-dee9-437f-afaf-611235bcda31-crio-socket\") pod \"insights-runtime-extractor-th8sw\" (UID: \"d7a76e75-dee9-437f-afaf-611235bcda31\") " pod="openshift-insights/insights-runtime-extractor-th8sw"
Apr 23 17:55:37.799353 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.799256 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h8xm\" (UniqueName: \"kubernetes.io/projected/c0717f3c-f89c-4cad-a2c7-5e017bcc9292-kube-api-access-6h8xm\") pod \"dns-default-n9t2k\" (UID: \"c0717f3c-f89c-4cad-a2c7-5e017bcc9292\") " pod="openshift-dns/dns-default-n9t2k"
Apr 23 17:55:37.799353 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.799288 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-registry-certificates\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.799353 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.799317 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c0717f3c-f89c-4cad-a2c7-5e017bcc9292-tmp-dir\") pod \"dns-default-n9t2k\" (UID: \"c0717f3c-f89c-4cad-a2c7-5e017bcc9292\") " pod="openshift-dns/dns-default-n9t2k"
Apr 23 17:55:37.799353 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.799350 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-trusted-ca\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.799553 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.799378 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2wcs\" (UniqueName: \"kubernetes.io/projected/63127b59-6d72-4d18-85c3-8766abc25908-kube-api-access-b2wcs\") pod \"ingress-canary-kms9t\" (UID: \"63127b59-6d72-4d18-85c3-8766abc25908\") " pod="openshift-ingress-canary/ingress-canary-kms9t"
Apr 23 17:55:37.799553 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.799410 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-image-registry-private-configuration\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.799553 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.799448 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/d7a76e75-dee9-437f-afaf-611235bcda31-data-volume\") pod \"insights-runtime-extractor-th8sw\" (UID: \"d7a76e75-dee9-437f-afaf-611235bcda31\") " pod="openshift-insights/insights-runtime-extractor-th8sw"
Apr 23 17:55:37.800011 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.799983 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/d7a76e75-dee9-437f-afaf-611235bcda31-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-th8sw\" (UID: \"d7a76e75-dee9-437f-afaf-611235bcda31\") " pod="openshift-insights/insights-runtime-extractor-th8sw"
Apr 23 17:55:37.800265 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.800224 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-ca-trust-extracted\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.800364 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.800309 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqc57\" (UniqueName: \"kubernetes.io/projected/d7a76e75-dee9-437f-afaf-611235bcda31-kube-api-access-gqc57\") pod \"insights-runtime-extractor-th8sw\" (UID: \"d7a76e75-dee9-437f-afaf-611235bcda31\") " pod="openshift-insights/insights-runtime-extractor-th8sw"
Apr 23 17:55:37.800364 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.800357 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0717f3c-f89c-4cad-a2c7-5e017bcc9292-config-volume\") pod \"dns-default-n9t2k\" (UID: \"c0717f3c-f89c-4cad-a2c7-5e017bcc9292\") " pod="openshift-dns/dns-default-n9t2k"
Apr 23 17:55:37.800461 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.800402 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-installation-pull-secrets\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.800461 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.800440 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-registry-tls\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.800558 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.800475 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-bound-sa-token\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.800558 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.800495 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-ca-trust-extracted\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.800660 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.800595 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-registry-certificates\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.801362 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.801336 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-trusted-ca\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"
Apr 23 17:55:37.804413 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.803326 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume
\"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-image-registry-private-configuration\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk" Apr 23 17:55:37.804413 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.803540 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-installation-pull-secrets\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk" Apr 23 17:55:37.805754 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.805731 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-registry-tls\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk" Apr 23 17:55:37.809565 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.809542 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs4c8\" (UniqueName: \"kubernetes.io/projected/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-kube-api-access-zs4c8\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk" Apr 23 17:55:37.810819 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.810798 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ab4270c5-eb00-4f8a-8f0e-3386237c56e1-bound-sa-token\") pod \"image-registry-7dc86d8d7f-wg7qk\" (UID: \"ab4270c5-eb00-4f8a-8f0e-3386237c56e1\") " 
pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk" Apr 23 17:55:37.813095 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.813074 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-574f7989c4-mftsr"] Apr 23 17:55:37.813200 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.813165 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:37.815828 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.815807 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-server-audit-profiles\"" Apr 23 17:55:37.815930 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.815834 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-client-certs\"" Apr 23 17:55:37.815930 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.815852 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kubelet-serving-ca-bundle\"" Apr 23 17:55:37.816233 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.816218 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-dockercfg-ct7bn\"" Apr 23 17:55:37.816335 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.816302 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-tls\"" Apr 23 17:55:37.816399 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.816371 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-612gon517h60c\"" Apr 23 17:55:37.900989 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.900965 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/c0717f3c-f89c-4cad-a2c7-5e017bcc9292-tmp-dir\") pod \"dns-default-n9t2k\" (UID: \"c0717f3c-f89c-4cad-a2c7-5e017bcc9292\") " pod="openshift-dns/dns-default-n9t2k" Apr 23 17:55:37.901073 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.900992 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b2wcs\" (UniqueName: \"kubernetes.io/projected/63127b59-6d72-4d18-85c3-8766abc25908-kube-api-access-b2wcs\") pod \"ingress-canary-kms9t\" (UID: \"63127b59-6d72-4d18-85c3-8766abc25908\") " pod="openshift-ingress-canary/ingress-canary-kms9t" Apr 23 17:55:37.901073 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901021 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/d7a76e75-dee9-437f-afaf-611235bcda31-data-volume\") pod \"insights-runtime-extractor-th8sw\" (UID: \"d7a76e75-dee9-437f-afaf-611235bcda31\") " pod="openshift-insights/insights-runtime-extractor-th8sw" Apr 23 17:55:37.901073 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901035 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/d7a76e75-dee9-437f-afaf-611235bcda31-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-th8sw\" (UID: \"d7a76e75-dee9-437f-afaf-611235bcda31\") " pod="openshift-insights/insights-runtime-extractor-th8sw" Apr 23 17:55:37.901236 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901103 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f448\" (UniqueName: \"kubernetes.io/projected/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-kube-api-access-8f448\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:37.901236 ip-10-0-142-106 kubenswrapper[2574]: I0423 
17:55:37.901133 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gqc57\" (UniqueName: \"kubernetes.io/projected/d7a76e75-dee9-437f-afaf-611235bcda31-kube-api-access-gqc57\") pod \"insights-runtime-extractor-th8sw\" (UID: \"d7a76e75-dee9-437f-afaf-611235bcda31\") " pod="openshift-insights/insights-runtime-extractor-th8sw" Apr 23 17:55:37.901236 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901158 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0717f3c-f89c-4cad-a2c7-5e017bcc9292-config-volume\") pod \"dns-default-n9t2k\" (UID: \"c0717f3c-f89c-4cad-a2c7-5e017bcc9292\") " pod="openshift-dns/dns-default-n9t2k" Apr 23 17:55:37.901236 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901222 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-client-ca-bundle\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:37.901420 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901270 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63127b59-6d72-4d18-85c3-8766abc25908-cert\") pod \"ingress-canary-kms9t\" (UID: \"63127b59-6d72-4d18-85c3-8766abc25908\") " pod="openshift-ingress-canary/ingress-canary-kms9t" Apr 23 17:55:37.901420 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901298 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-574f7989c4-mftsr\" (UID: 
\"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:37.901420 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901332 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/d7a76e75-dee9-437f-afaf-611235bcda31-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-th8sw\" (UID: \"d7a76e75-dee9-437f-afaf-611235bcda31\") " pod="openshift-insights/insights-runtime-extractor-th8sw" Apr 23 17:55:37.901420 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901341 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c0717f3c-f89c-4cad-a2c7-5e017bcc9292-tmp-dir\") pod \"dns-default-n9t2k\" (UID: \"c0717f3c-f89c-4cad-a2c7-5e017bcc9292\") " pod="openshift-dns/dns-default-n9t2k" Apr 23 17:55:37.901420 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901359 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-client-certs\" (UniqueName: \"kubernetes.io/secret/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-secret-metrics-server-client-certs\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:37.901420 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901388 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c0717f3c-f89c-4cad-a2c7-5e017bcc9292-metrics-tls\") pod \"dns-default-n9t2k\" (UID: \"c0717f3c-f89c-4cad-a2c7-5e017bcc9292\") " pod="openshift-dns/dns-default-n9t2k" Apr 23 17:55:37.901420 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901412 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: 
\"kubernetes.io/empty-dir/d7a76e75-dee9-437f-afaf-611235bcda31-data-volume\") pod \"insights-runtime-extractor-th8sw\" (UID: \"d7a76e75-dee9-437f-afaf-611235bcda31\") " pod="openshift-insights/insights-runtime-extractor-th8sw" Apr 23 17:55:37.901420 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901419 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-secret-metrics-server-tls\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:37.901823 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901446 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-metrics-server-audit-profiles\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:37.901823 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901477 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/d7a76e75-dee9-437f-afaf-611235bcda31-crio-socket\") pod \"insights-runtime-extractor-th8sw\" (UID: \"d7a76e75-dee9-437f-afaf-611235bcda31\") " pod="openshift-insights/insights-runtime-extractor-th8sw" Apr 23 17:55:37.901823 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901502 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-audit-log\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " 
pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:37.901823 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901532 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6h8xm\" (UniqueName: \"kubernetes.io/projected/c0717f3c-f89c-4cad-a2c7-5e017bcc9292-kube-api-access-6h8xm\") pod \"dns-default-n9t2k\" (UID: \"c0717f3c-f89c-4cad-a2c7-5e017bcc9292\") " pod="openshift-dns/dns-default-n9t2k" Apr 23 17:55:37.901823 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901558 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/d7a76e75-dee9-437f-afaf-611235bcda31-crio-socket\") pod \"insights-runtime-extractor-th8sw\" (UID: \"d7a76e75-dee9-437f-afaf-611235bcda31\") " pod="openshift-insights/insights-runtime-extractor-th8sw" Apr 23 17:55:37.901823 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901738 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0717f3c-f89c-4cad-a2c7-5e017bcc9292-config-volume\") pod \"dns-default-n9t2k\" (UID: \"c0717f3c-f89c-4cad-a2c7-5e017bcc9292\") " pod="openshift-dns/dns-default-n9t2k" Apr 23 17:55:37.901823 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.901780 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/d7a76e75-dee9-437f-afaf-611235bcda31-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-th8sw\" (UID: \"d7a76e75-dee9-437f-afaf-611235bcda31\") " pod="openshift-insights/insights-runtime-extractor-th8sw" Apr 23 17:55:37.903477 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.903448 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/d7a76e75-dee9-437f-afaf-611235bcda31-insights-runtime-extractor-tls\") pod 
\"insights-runtime-extractor-th8sw\" (UID: \"d7a76e75-dee9-437f-afaf-611235bcda31\") " pod="openshift-insights/insights-runtime-extractor-th8sw" Apr 23 17:55:37.903706 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.903689 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c0717f3c-f89c-4cad-a2c7-5e017bcc9292-metrics-tls\") pod \"dns-default-n9t2k\" (UID: \"c0717f3c-f89c-4cad-a2c7-5e017bcc9292\") " pod="openshift-dns/dns-default-n9t2k" Apr 23 17:55:37.905656 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.905637 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk" Apr 23 17:55:37.914324 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.914303 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h8xm\" (UniqueName: \"kubernetes.io/projected/c0717f3c-f89c-4cad-a2c7-5e017bcc9292-kube-api-access-6h8xm\") pod \"dns-default-n9t2k\" (UID: \"c0717f3c-f89c-4cad-a2c7-5e017bcc9292\") " pod="openshift-dns/dns-default-n9t2k" Apr 23 17:55:37.919841 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.919819 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63127b59-6d72-4d18-85c3-8766abc25908-cert\") pod \"ingress-canary-kms9t\" (UID: \"63127b59-6d72-4d18-85c3-8766abc25908\") " pod="openshift-ingress-canary/ingress-canary-kms9t" Apr 23 17:55:37.921814 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.921791 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2wcs\" (UniqueName: \"kubernetes.io/projected/63127b59-6d72-4d18-85c3-8766abc25908-kube-api-access-b2wcs\") pod \"ingress-canary-kms9t\" (UID: \"63127b59-6d72-4d18-85c3-8766abc25908\") " pod="openshift-ingress-canary/ingress-canary-kms9t" Apr 23 17:55:37.926802 ip-10-0-142-106 kubenswrapper[2574]: I0423 
17:55:37.926783 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqc57\" (UniqueName: \"kubernetes.io/projected/d7a76e75-dee9-437f-afaf-611235bcda31-kube-api-access-gqc57\") pod \"insights-runtime-extractor-th8sw\" (UID: \"d7a76e75-dee9-437f-afaf-611235bcda31\") " pod="openshift-insights/insights-runtime-extractor-th8sw" Apr 23 17:55:37.940432 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.940409 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kms9t" Apr 23 17:55:37.961258 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.961219 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-n9t2k" Apr 23 17:55:37.985775 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:37.985748 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-th8sw" Apr 23 17:55:38.002917 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.002880 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8f448\" (UniqueName: \"kubernetes.io/projected/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-kube-api-access-8f448\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:38.003041 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.002950 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-client-ca-bundle\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:38.003041 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.002996 2574 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:38.003041 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.003028 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-server-client-certs\" (UniqueName: \"kubernetes.io/secret/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-secret-metrics-server-client-certs\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:38.003206 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.003069 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-secret-metrics-server-tls\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:38.003206 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.003095 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-metrics-server-audit-profiles\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:38.003206 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.003128 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-audit-log\") pod \"metrics-server-574f7989c4-mftsr\" (UID: 
\"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:38.006452 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.003645 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-audit-log\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:38.006452 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.005263 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:38.008564 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.008519 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-secret-metrics-server-tls\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:38.011141 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.011113 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-client-ca-bundle\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:38.014596 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.014522 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-metrics-server-audit-profiles\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:38.016722 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.016676 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f448\" (UniqueName: \"kubernetes.io/projected/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-kube-api-access-8f448\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:38.018700 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.017288 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-client-certs\" (UniqueName: \"kubernetes.io/secret/61cf4b7e-bc78-4b06-a4ed-bbbcd8031991-secret-metrics-server-client-certs\") pod \"metrics-server-574f7989c4-mftsr\" (UID: \"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991\") " pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:38.097934 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.097857 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7dc86d8d7f-wg7qk"] Apr 23 17:55:38.103294 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:38.103231 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab4270c5_eb00_4f8a_8f0e_3386237c56e1.slice/crio-55ca4a6d60e7365a0935aae92130f9f48079e458eabfacb9eedebf7fc3d71d58 WatchSource:0}: Error finding container 55ca4a6d60e7365a0935aae92130f9f48079e458eabfacb9eedebf7fc3d71d58: Status 404 returned error can't find the container with id 55ca4a6d60e7365a0935aae92130f9f48079e458eabfacb9eedebf7fc3d71d58 Apr 23 17:55:38.121434 ip-10-0-142-106 kubenswrapper[2574]: 
I0423 17:55:38.121405 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kms9t"] Apr 23 17:55:38.126915 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.126339 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:38.127455 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:38.127420 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63127b59_6d72_4d18_85c3_8766abc25908.slice/crio-330b1436c221ba8c752343ead9aea55f0c0f611ade6f42e7db5050e28cf514fd WatchSource:0}: Error finding container 330b1436c221ba8c752343ead9aea55f0c0f611ade6f42e7db5050e28cf514fd: Status 404 returned error can't find the container with id 330b1436c221ba8c752343ead9aea55f0c0f611ade6f42e7db5050e28cf514fd Apr 23 17:55:38.146116 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.146081 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-n9t2k"] Apr 23 17:55:38.150103 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:38.150018 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0717f3c_f89c_4cad_a2c7_5e017bcc9292.slice/crio-2a6806e5121c919506c4c01c2b06833dd326e1ff236571ebbe0d0da769b0f2bd WatchSource:0}: Error finding container 2a6806e5121c919506c4c01c2b06833dd326e1ff236571ebbe0d0da769b0f2bd: Status 404 returned error can't find the container with id 2a6806e5121c919506c4c01c2b06833dd326e1ff236571ebbe0d0da769b0f2bd Apr 23 17:55:38.163177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.163140 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-th8sw"] Apr 23 17:55:38.201541 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.201492 2574 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/monitoring-plugin-7dccd58f55-hb655"] Apr 23 17:55:38.209284 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.209262 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hb655" Apr 23 17:55:38.211774 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.211756 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"default-dockercfg-zqgfs\"" Apr 23 17:55:38.211885 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.211757 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"monitoring-plugin-cert\"" Apr 23 17:55:38.215346 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.215301 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7dccd58f55-hb655"] Apr 23 17:55:38.250822 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.250707 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-574f7989c4-mftsr"] Apr 23 17:55:38.253279 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:38.253256 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61cf4b7e_bc78_4b06_a4ed_bbbcd8031991.slice/crio-58a66a558e7dc32533885552862c46b7f3f6e6887d2a80c38a3b99589598c713 WatchSource:0}: Error finding container 58a66a558e7dc32533885552862c46b7f3f6e6887d2a80c38a3b99589598c713: Status 404 returned error can't find the container with id 58a66a558e7dc32533885552862c46b7f3f6e6887d2a80c38a3b99589598c713 Apr 23 17:55:38.306567 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.306506 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e9b2584d-b2ad-4cda-af50-a4d6572658b0-monitoring-plugin-cert\") pod 
\"monitoring-plugin-7dccd58f55-hb655\" (UID: \"e9b2584d-b2ad-4cda-af50-a4d6572658b0\") " pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hb655" Apr 23 17:55:38.407189 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.407159 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e9b2584d-b2ad-4cda-af50-a4d6572658b0-monitoring-plugin-cert\") pod \"monitoring-plugin-7dccd58f55-hb655\" (UID: \"e9b2584d-b2ad-4cda-af50-a4d6572658b0\") " pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hb655" Apr 23 17:55:38.407331 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:38.407313 2574 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: secret "monitoring-plugin-cert" not found Apr 23 17:55:38.407388 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:55:38.407378 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9b2584d-b2ad-4cda-af50-a4d6572658b0-monitoring-plugin-cert podName:e9b2584d-b2ad-4cda-af50-a4d6572658b0 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:38.907361141 +0000 UTC m=+176.804814889 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/e9b2584d-b2ad-4cda-af50-a4d6572658b0-monitoring-plugin-cert") pod "monitoring-plugin-7dccd58f55-hb655" (UID: "e9b2584d-b2ad-4cda-af50-a4d6572658b0") : secret "monitoring-plugin-cert" not found Apr 23 17:55:38.672028 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.671943 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj"] Apr 23 17:55:38.674989 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.674630 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.677519 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.677494 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"federate-client-certs\"" Apr 23 17:55:38.678107 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.678083 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-kube-rbac-proxy-config\"" Apr 23 17:55:38.679033 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.679012 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-tls\"" Apr 23 17:55:38.679419 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.679399 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemeter-client-serving-certs-ca-bundle\"" Apr 23 17:55:38.679748 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.679726 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-dockercfg-7m5xn\"" Apr 23 17:55:38.679839 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.679828 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client\"" Apr 23 17:55:38.692782 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.692274 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj"] Apr 23 17:55:38.695677 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.695654 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemeter-trusted-ca-bundle-8i12ta5c71j38\"" Apr 23 17:55:38.810877 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.810836 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6628d669-5605-4946-ad62-a1f2c5adce5c-metrics-client-ca\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.811017 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.810942 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/6628d669-5605-4946-ad62-a1f2c5adce5c-federate-client-tls\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.811017 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.810974 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/6628d669-5605-4946-ad62-a1f2c5adce5c-telemeter-client-tls\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.811017 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.811011 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/6628d669-5605-4946-ad62-a1f2c5adce5c-secret-telemeter-client\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.811187 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.811045 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsn5s\" (UniqueName: \"kubernetes.io/projected/6628d669-5605-4946-ad62-a1f2c5adce5c-kube-api-access-gsn5s\") pod 
\"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.811342 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.811253 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6628d669-5605-4946-ad62-a1f2c5adce5c-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.811342 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.811312 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6628d669-5605-4946-ad62-a1f2c5adce5c-serving-certs-ca-bundle\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.811521 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.811399 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6628d669-5605-4946-ad62-a1f2c5adce5c-telemeter-trusted-ca-bundle\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.912816 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.912763 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6628d669-5605-4946-ad62-a1f2c5adce5c-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: 
\"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.912913 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.912867 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6628d669-5605-4946-ad62-a1f2c5adce5c-serving-certs-ca-bundle\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.912982 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.912928 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6628d669-5605-4946-ad62-a1f2c5adce5c-telemeter-trusted-ca-bundle\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.912982 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.912959 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6628d669-5605-4946-ad62-a1f2c5adce5c-metrics-client-ca\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.913086 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.913034 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e9b2584d-b2ad-4cda-af50-a4d6572658b0-monitoring-plugin-cert\") pod \"monitoring-plugin-7dccd58f55-hb655\" (UID: \"e9b2584d-b2ad-4cda-af50-a4d6572658b0\") " pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hb655" Apr 23 17:55:38.913086 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.913062 2574 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/6628d669-5605-4946-ad62-a1f2c5adce5c-federate-client-tls\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.913187 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.913095 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/6628d669-5605-4946-ad62-a1f2c5adce5c-telemeter-client-tls\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.913187 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.913143 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/6628d669-5605-4946-ad62-a1f2c5adce5c-secret-telemeter-client\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.913283 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.913184 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gsn5s\" (UniqueName: \"kubernetes.io/projected/6628d669-5605-4946-ad62-a1f2c5adce5c-kube-api-access-gsn5s\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.915678 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.913631 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6628d669-5605-4946-ad62-a1f2c5adce5c-serving-certs-ca-bundle\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: 
\"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.915678 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.915030 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6628d669-5605-4946-ad62-a1f2c5adce5c-telemeter-trusted-ca-bundle\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.915678 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.915604 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6628d669-5605-4946-ad62-a1f2c5adce5c-metrics-client-ca\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.915882 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.915785 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e9b2584d-b2ad-4cda-af50-a4d6572658b0-monitoring-plugin-cert\") pod \"monitoring-plugin-7dccd58f55-hb655\" (UID: \"e9b2584d-b2ad-4cda-af50-a4d6572658b0\") " pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hb655" Apr 23 17:55:38.916827 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.916783 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/6628d669-5605-4946-ad62-a1f2c5adce5c-federate-client-tls\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.917238 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.917202 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6628d669-5605-4946-ad62-a1f2c5adce5c-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.918367 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.917919 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/6628d669-5605-4946-ad62-a1f2c5adce5c-secret-telemeter-client\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.918367 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.918335 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/6628d669-5605-4946-ad62-a1f2c5adce5c-telemeter-client-tls\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.931329 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.931288 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsn5s\" (UniqueName: \"kubernetes.io/projected/6628d669-5605-4946-ad62-a1f2c5adce5c-kube-api-access-gsn5s\") pod \"telemeter-client-666ccd8c8f-k7gzj\" (UID: \"6628d669-5605-4946-ad62-a1f2c5adce5c\") " pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:38.988371 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:38.988342 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" Apr 23 17:55:39.054373 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:39.054312 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n9t2k" event={"ID":"c0717f3c-f89c-4cad-a2c7-5e017bcc9292","Type":"ContainerStarted","Data":"2a6806e5121c919506c4c01c2b06833dd326e1ff236571ebbe0d0da769b0f2bd"} Apr 23 17:55:39.056318 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:39.056290 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kms9t" event={"ID":"63127b59-6d72-4d18-85c3-8766abc25908","Type":"ContainerStarted","Data":"330b1436c221ba8c752343ead9aea55f0c0f611ade6f42e7db5050e28cf514fd"} Apr 23 17:55:39.059723 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:39.059674 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" event={"ID":"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991","Type":"ContainerStarted","Data":"58a66a558e7dc32533885552862c46b7f3f6e6887d2a80c38a3b99589598c713"} Apr 23 17:55:39.064462 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:39.064436 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-th8sw" event={"ID":"d7a76e75-dee9-437f-afaf-611235bcda31","Type":"ContainerStarted","Data":"4d0abe3fa0a92ba9fa6a924b46b47350c58db01bafa6206503f8fec30201f9b6"} Apr 23 17:55:39.064558 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:39.064464 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-th8sw" event={"ID":"d7a76e75-dee9-437f-afaf-611235bcda31","Type":"ContainerStarted","Data":"336afa818e69c453bd20663a09b9bb01aa9572a7598dc276c146e183cb4a7c06"} Apr 23 17:55:39.064558 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:39.064478 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-th8sw" 
event={"ID":"d7a76e75-dee9-437f-afaf-611235bcda31","Type":"ContainerStarted","Data":"6326e12fe6318e68f42a4df848671a3cc58c392d370fd396ef4bb5d74da57dc3"} Apr 23 17:55:39.067239 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:39.067191 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk" event={"ID":"ab4270c5-eb00-4f8a-8f0e-3386237c56e1","Type":"ContainerStarted","Data":"8df34432171daa3e0e388ed1b51446e1feded10f6c1ea23d3790ed8f77492ecb"} Apr 23 17:55:39.067239 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:39.067220 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk" event={"ID":"ab4270c5-eb00-4f8a-8f0e-3386237c56e1","Type":"ContainerStarted","Data":"55ca4a6d60e7365a0935aae92130f9f48079e458eabfacb9eedebf7fc3d71d58"} Apr 23 17:55:39.067474 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:39.067438 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk" Apr 23 17:55:39.122957 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:39.122557 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hb655" Apr 23 17:55:39.162718 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:39.162431 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk" podStartSLOduration=17.162408624 podStartE2EDuration="17.162408624s" podCreationTimestamp="2026-04-23 17:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:39.101786795 +0000 UTC m=+176.999240567" watchObservedRunningTime="2026-04-23 17:55:39.162408624 +0000 UTC m=+177.059862395" Apr 23 17:55:39.162718 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:39.162663 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj"] Apr 23 17:55:39.167006 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:39.166933 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6628d669_5605_4946_ad62_a1f2c5adce5c.slice/crio-0d82e1f3eceee32701131f8b50dfab8dbcc421f56da8cfa74b1af8123d9a330f WatchSource:0}: Error finding container 0d82e1f3eceee32701131f8b50dfab8dbcc421f56da8cfa74b1af8123d9a330f: Status 404 returned error can't find the container with id 0d82e1f3eceee32701131f8b50dfab8dbcc421f56da8cfa74b1af8123d9a330f Apr 23 17:55:39.271372 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:39.271320 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7dccd58f55-hb655"] Apr 23 17:55:39.276212 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:39.276171 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9b2584d_b2ad_4cda_af50_a4d6572658b0.slice/crio-31662330de427fee94f5016f773d514948945bab2207e8496ea16997a69b82c2 WatchSource:0}: 
Error finding container 31662330de427fee94f5016f773d514948945bab2207e8496ea16997a69b82c2: Status 404 returned error can't find the container with id 31662330de427fee94f5016f773d514948945bab2207e8496ea16997a69b82c2 Apr 23 17:55:40.071375 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:40.071330 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" event={"ID":"6628d669-5605-4946-ad62-a1f2c5adce5c","Type":"ContainerStarted","Data":"0d82e1f3eceee32701131f8b50dfab8dbcc421f56da8cfa74b1af8123d9a330f"} Apr 23 17:55:40.072845 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:40.072811 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hb655" event={"ID":"e9b2584d-b2ad-4cda-af50-a4d6572658b0","Type":"ContainerStarted","Data":"31662330de427fee94f5016f773d514948945bab2207e8496ea16997a69b82c2"} Apr 23 17:55:42.406524 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.406490 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-85965948fd-64qrf"] Apr 23 17:55:42.409669 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.409652 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.412555 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.412323 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Apr 23 17:55:42.412555 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.412340 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-52nww\"" Apr 23 17:55:42.412555 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.412348 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Apr 23 17:55:42.412555 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.412383 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Apr 23 17:55:42.412555 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.412436 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Apr 23 17:55:42.412555 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.412329 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Apr 23 17:55:42.413474 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.413426 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Apr 23 17:55:42.413743 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.413729 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Apr 23 17:55:42.418773 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.418752 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Apr 23 17:55:42.420081 ip-10-0-142-106 
kubenswrapper[2574]: I0423 17:55:42.420061 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85965948fd-64qrf"] Apr 23 17:55:42.548270 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.548248 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-oauth-config\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.548389 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.548280 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-service-ca\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.548389 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.548300 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-serving-cert\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.548389 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.548369 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-trusted-ca-bundle\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.548498 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.548408 2574 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-config\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.548498 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.548426 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx655\" (UniqueName: \"kubernetes.io/projected/83c905f3-3f6e-49ad-bdd8-3855069141c5-kube-api-access-rx655\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.548498 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.548460 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-oauth-serving-cert\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.649376 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.649175 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-trusted-ca-bundle\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.649376 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.649256 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-config\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " 
pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.649376 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.649305 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rx655\" (UniqueName: \"kubernetes.io/projected/83c905f3-3f6e-49ad-bdd8-3855069141c5-kube-api-access-rx655\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.650242 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.649757 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-oauth-serving-cert\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.650242 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.649820 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-oauth-config\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.650242 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.649888 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-service-ca\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.650242 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.649927 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-serving-cert\") pod 
\"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.652487 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.652471 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Apr 23 17:55:42.652765 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.652749 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Apr 23 17:55:42.654459 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.654440 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Apr 23 17:55:42.654554 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.654541 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Apr 23 17:55:42.654704 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.654680 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Apr 23 17:55:42.658995 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.658934 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Apr 23 17:55:42.660746 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.660721 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-config\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.660920 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.660875 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-trusted-ca-bundle\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.661348 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.661330 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-oauth-serving-cert\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.662235 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.662136 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-service-ca\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.662980 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.662949 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-oauth-config\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.664330 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.664098 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-serving-cert\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.665385 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.665270 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Apr 23 17:55:42.676007 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.675985 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Apr 23 17:55:42.686110 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.686089 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx655\" (UniqueName: \"kubernetes.io/projected/83c905f3-3f6e-49ad-bdd8-3855069141c5-kube-api-access-rx655\") pod \"console-85965948fd-64qrf\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.723143 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.723120 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-52nww\"" Apr 23 17:55:42.731231 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.731213 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:42.865360 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:42.865332 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85965948fd-64qrf"] Apr 23 17:55:42.868845 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:42.868815 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83c905f3_3f6e_49ad_bdd8_3855069141c5.slice/crio-febf3637ce9952692ac4dfb6c3081ded626d1b5998581f745faed17b2187f9dc WatchSource:0}: Error finding container febf3637ce9952692ac4dfb6c3081ded626d1b5998581f745faed17b2187f9dc: Status 404 returned error can't find the container with id febf3637ce9952692ac4dfb6c3081ded626d1b5998581f745faed17b2187f9dc Apr 23 17:55:43.086859 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:43.086814 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n9t2k" event={"ID":"c0717f3c-f89c-4cad-a2c7-5e017bcc9292","Type":"ContainerStarted","Data":"66d320f69f34c33ab57949a8af789344d593347421fff6a5f2d186efb6e58bb8"} Apr 23 17:55:43.086859 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:43.086856 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n9t2k" event={"ID":"c0717f3c-f89c-4cad-a2c7-5e017bcc9292","Type":"ContainerStarted","Data":"4355d0c8ea6954905ea2423eb53d65b7459050c3dd1c3b51e6162150bd204262"} Apr 23 17:55:43.087100 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:43.086950 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-n9t2k" Apr 23 17:55:43.088391 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:43.088361 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kms9t" 
event={"ID":"63127b59-6d72-4d18-85c3-8766abc25908","Type":"ContainerStarted","Data":"114930fdb2c5c587c6cd7e974fdd95df90c9c5e07cf6c547a019d70cb11946b7"} Apr 23 17:55:43.089883 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:43.089853 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" event={"ID":"6628d669-5605-4946-ad62-a1f2c5adce5c","Type":"ContainerStarted","Data":"a196d60faa1dff46c10ac9df432553a6ac1c2343883a5cfe51de67a71bdcaf72"} Apr 23 17:55:43.091291 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:43.091270 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hb655" event={"ID":"e9b2584d-b2ad-4cda-af50-a4d6572658b0","Type":"ContainerStarted","Data":"51f80f4d1bdefd697487a84c3569712ffbf9723e2e22767dfbb40ace081f5da1"} Apr 23 17:55:43.091499 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:43.091475 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hb655" Apr 23 17:55:43.092868 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:43.092842 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" event={"ID":"61cf4b7e-bc78-4b06-a4ed-bbbcd8031991","Type":"ContainerStarted","Data":"b7390c3c540fc5d72065a10f9cb7074e5d0ffed4a1dc2cdfd8e89e4c4ab6262e"} Apr 23 17:55:43.095276 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:43.095252 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-th8sw" event={"ID":"d7a76e75-dee9-437f-afaf-611235bcda31","Type":"ContainerStarted","Data":"6e56a05cd2bca2c26284e84ab1426cda652d630ea893488bc5a5e7f3c1ab323e"} Apr 23 17:55:43.096592 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:43.096553 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85965948fd-64qrf" 
event={"ID":"83c905f3-3f6e-49ad-bdd8-3855069141c5","Type":"ContainerStarted","Data":"febf3637ce9952692ac4dfb6c3081ded626d1b5998581f745faed17b2187f9dc"} Apr 23 17:55:43.097362 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:43.097339 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hb655" Apr 23 17:55:43.129961 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:43.129916 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-n9t2k" podStartSLOduration=2.181888954 podStartE2EDuration="6.129906017s" podCreationTimestamp="2026-04-23 17:55:37 +0000 UTC" firstStartedPulling="2026-04-23 17:55:38.152629856 +0000 UTC m=+176.050083604" lastFinishedPulling="2026-04-23 17:55:42.100646904 +0000 UTC m=+179.998100667" observedRunningTime="2026-04-23 17:55:43.128492038 +0000 UTC m=+181.025945805" watchObservedRunningTime="2026-04-23 17:55:43.129906017 +0000 UTC m=+181.027359786" Apr 23 17:55:43.157998 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:43.157963 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-hb655" podStartSLOduration=2.333126402 podStartE2EDuration="5.157951487s" podCreationTimestamp="2026-04-23 17:55:38 +0000 UTC" firstStartedPulling="2026-04-23 17:55:39.283469802 +0000 UTC m=+177.180923570" lastFinishedPulling="2026-04-23 17:55:42.108294893 +0000 UTC m=+180.005748655" observedRunningTime="2026-04-23 17:55:43.157064284 +0000 UTC m=+181.054518053" watchObservedRunningTime="2026-04-23 17:55:43.157951487 +0000 UTC m=+181.055405256" Apr 23 17:55:43.181306 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:43.181260 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kms9t" podStartSLOduration=2.211640227 podStartE2EDuration="6.181244939s" podCreationTimestamp="2026-04-23 17:55:37 +0000 UTC" 
firstStartedPulling="2026-04-23 17:55:38.131471755 +0000 UTC m=+176.028925511" lastFinishedPulling="2026-04-23 17:55:42.101076472 +0000 UTC m=+179.998530223" observedRunningTime="2026-04-23 17:55:43.181061769 +0000 UTC m=+181.078515552" watchObservedRunningTime="2026-04-23 17:55:43.181244939 +0000 UTC m=+181.078698710" Apr 23 17:55:43.222652 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:43.222609 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-th8sw" podStartSLOduration=2.372388402 podStartE2EDuration="6.222595177s" podCreationTimestamp="2026-04-23 17:55:37 +0000 UTC" firstStartedPulling="2026-04-23 17:55:38.25039434 +0000 UTC m=+176.147848100" lastFinishedPulling="2026-04-23 17:55:42.100601111 +0000 UTC m=+179.998054875" observedRunningTime="2026-04-23 17:55:43.222259859 +0000 UTC m=+181.119713630" watchObservedRunningTime="2026-04-23 17:55:43.222595177 +0000 UTC m=+181.120048946" Apr 23 17:55:43.250496 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:43.250442 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" podStartSLOduration=2.405196822 podStartE2EDuration="6.250423775s" podCreationTimestamp="2026-04-23 17:55:37 +0000 UTC" firstStartedPulling="2026-04-23 17:55:38.255395242 +0000 UTC m=+176.152848990" lastFinishedPulling="2026-04-23 17:55:42.100622181 +0000 UTC m=+179.998075943" observedRunningTime="2026-04-23 17:55:43.250019855 +0000 UTC m=+181.147473648" watchObservedRunningTime="2026-04-23 17:55:43.250423775 +0000 UTC m=+181.147877547" Apr 23 17:55:44.102431 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:44.102245 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" event={"ID":"6628d669-5605-4946-ad62-a1f2c5adce5c","Type":"ContainerStarted","Data":"481ad4f873e182c8ef473177e2f623a1c6f0ff2e88edc4560135bea48f079636"} Apr 23 
17:55:44.102431 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:44.102295 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" event={"ID":"6628d669-5605-4946-ad62-a1f2c5adce5c","Type":"ContainerStarted","Data":"f462faf6bdef7b170b7b187258180b23ea396db88927eff8e635358b711f2971"} Apr 23 17:55:44.131905 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:44.131841 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-666ccd8c8f-k7gzj" podStartSLOduration=2.142030145 podStartE2EDuration="6.131822295s" podCreationTimestamp="2026-04-23 17:55:38 +0000 UTC" firstStartedPulling="2026-04-23 17:55:39.16962311 +0000 UTC m=+177.067076858" lastFinishedPulling="2026-04-23 17:55:43.159415246 +0000 UTC m=+181.056869008" observedRunningTime="2026-04-23 17:55:44.129484558 +0000 UTC m=+182.026938352" watchObservedRunningTime="2026-04-23 17:55:44.131822295 +0000 UTC m=+182.029276066" Apr 23 17:55:45.065931 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.065888 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-85965948fd-64qrf"] Apr 23 17:55:45.090732 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.090699 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-848b8cf89c-v72pl"] Apr 23 17:55:45.100530 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.100502 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.106478 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.106454 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-848b8cf89c-v72pl"] Apr 23 17:55:45.172780 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.172747 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-config\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.172967 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.172788 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-serving-cert\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.172967 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.172932 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-oauth-config\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.173278 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.173230 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-oauth-serving-cert\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 
23 17:55:45.173490 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.173451 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-service-ca\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.173694 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.173534 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-trusted-ca-bundle\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.173694 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.173623 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27jk7\" (UniqueName: \"kubernetes.io/projected/e51e2ea4-6814-44c7-8e06-9ac9f446025a-kube-api-access-27jk7\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.273932 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.273899 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-config\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.273932 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.273935 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-serving-cert\") pod 
\"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.274172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.273970 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-oauth-config\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.274172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.274010 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-oauth-serving-cert\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.274172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.274046 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-service-ca\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.274172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.274070 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-trusted-ca-bundle\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.274172 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.274102 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-27jk7\" (UniqueName: 
\"kubernetes.io/projected/e51e2ea4-6814-44c7-8e06-9ac9f446025a-kube-api-access-27jk7\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.274782 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.274733 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-config\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.274911 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.274815 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-oauth-serving-cert\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.274911 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.274855 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-service-ca\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.275075 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.275048 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-trusted-ca-bundle\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.276956 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.276922 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-oauth-config\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.277060 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.277032 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-serving-cert\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.284318 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.284299 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-27jk7\" (UniqueName: \"kubernetes.io/projected/e51e2ea4-6814-44c7-8e06-9ac9f446025a-kube-api-access-27jk7\") pod \"console-848b8cf89c-v72pl\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") " pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.412288 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.412269 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:45.542238 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:45.542210 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-848b8cf89c-v72pl"] Apr 23 17:55:45.545176 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:55:45.545153 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode51e2ea4_6814_44c7_8e06_9ac9f446025a.slice/crio-dcde9fb861f5137d2a899b9d87fe823de1e853071ddf379143cd62a2d963c791 WatchSource:0}: Error finding container dcde9fb861f5137d2a899b9d87fe823de1e853071ddf379143cd62a2d963c791: Status 404 returned error can't find the container with id dcde9fb861f5137d2a899b9d87fe823de1e853071ddf379143cd62a2d963c791 Apr 23 17:55:46.114970 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:46.114926 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-848b8cf89c-v72pl" event={"ID":"e51e2ea4-6814-44c7-8e06-9ac9f446025a","Type":"ContainerStarted","Data":"057c6a54767edabeacda2821a05bfd0b845ba6a8994d8761508208daf0119f2e"} Apr 23 17:55:46.114970 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:46.114975 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-848b8cf89c-v72pl" event={"ID":"e51e2ea4-6814-44c7-8e06-9ac9f446025a","Type":"ContainerStarted","Data":"dcde9fb861f5137d2a899b9d87fe823de1e853071ddf379143cd62a2d963c791"} Apr 23 17:55:46.116500 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:46.116473 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85965948fd-64qrf" event={"ID":"83c905f3-3f6e-49ad-bdd8-3855069141c5","Type":"ContainerStarted","Data":"447daf6134034b1a72097f99e5f04e60b0cc46d49c329cc3a758bb28f026ef8e"} Apr 23 17:55:46.138471 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:46.138402 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-848b8cf89c-v72pl" podStartSLOduration=1.138388997 podStartE2EDuration="1.138388997s" podCreationTimestamp="2026-04-23 17:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:46.137444504 +0000 UTC m=+184.034898276" watchObservedRunningTime="2026-04-23 17:55:46.138388997 +0000 UTC m=+184.035842767" Apr 23 17:55:46.157989 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:46.157944 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-85965948fd-64qrf" podStartSLOduration=1.654160742 podStartE2EDuration="4.157934198s" podCreationTimestamp="2026-04-23 17:55:42 +0000 UTC" firstStartedPulling="2026-04-23 17:55:42.872066641 +0000 UTC m=+180.769520389" lastFinishedPulling="2026-04-23 17:55:45.375840097 +0000 UTC m=+183.273293845" observedRunningTime="2026-04-23 17:55:46.156876581 +0000 UTC m=+184.054330353" watchObservedRunningTime="2026-04-23 17:55:46.157934198 +0000 UTC m=+184.055387967" Apr 23 17:55:52.732001 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:52.731972 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:55:53.105275 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:53.105202 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-n9t2k" Apr 23 17:55:55.412352 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:55.412320 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:55.412775 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:55.412393 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:55.417409 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:55.417386 2574 
kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:56.021424 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:56.021389 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2kfrf" Apr 23 17:55:56.147099 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:56.147073 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-848b8cf89c-v72pl" Apr 23 17:55:58.127303 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:58.127259 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:55:58.127303 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:55:58.127299 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:56:00.077522 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:00.077485 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-7dc86d8d7f-wg7qk" Apr 23 17:56:04.507990 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.507954 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret\") pod \"global-pull-secret-syncer-p5ndb\" (UID: \"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb\") " pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:56:04.508404 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.508000 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs\") pod \"network-metrics-daemon-45ztw\" (UID: \"5af1b6bf-71a6-4257-9a8a-b48c1c14659c\") " 
pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:56:04.510816 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.510795 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 23 17:56:04.511837 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.511821 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 23 17:56:04.521161 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.521130 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1b6bf-71a6-4257-9a8a-b48c1c14659c-metrics-certs\") pod \"network-metrics-daemon-45ztw\" (UID: \"5af1b6bf-71a6-4257-9a8a-b48c1c14659c\") " pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:56:04.521268 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.521140 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb-original-pull-secret\") pod \"global-pull-secret-syncer-p5ndb\" (UID: \"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb\") " pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:56:04.608996 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.608968 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k7gcb\" (UniqueName: \"kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb\") pod \"network-check-target-88zs6\" (UID: \"35ee14f0-f248-4da4-a578-5901f2cd8f5f\") " pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:56:04.611748 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.611731 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 23 17:56:04.625605 ip-10-0-142-106 kubenswrapper[2574]: 
I0423 17:56:04.625588 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 23 17:56:04.632200 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.632184 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7gcb\" (UniqueName: \"kubernetes.io/projected/35ee14f0-f248-4da4-a578-5901f2cd8f5f-kube-api-access-k7gcb\") pod \"network-check-target-88zs6\" (UID: \"35ee14f0-f248-4da4-a578-5901f2cd8f5f\") " pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:56:04.757113 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.757093 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-5487j\"" Apr 23 17:56:04.762451 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.762414 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-p5ndb" Apr 23 17:56:04.764987 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.764970 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45ztw" Apr 23 17:56:04.771222 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.771194 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-gszvz\"" Apr 23 17:56:04.778810 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.778792 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:56:04.920416 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.920390 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-45ztw"] Apr 23 17:56:04.923221 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:56:04.923172 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5af1b6bf_71a6_4257_9a8a_b48c1c14659c.slice/crio-76cf776796e561321c1893c30e2f6e7956005e170eb4bfdfdb70deb6e349b79f WatchSource:0}: Error finding container 76cf776796e561321c1893c30e2f6e7956005e170eb4bfdfdb70deb6e349b79f: Status 404 returned error can't find the container with id 76cf776796e561321c1893c30e2f6e7956005e170eb4bfdfdb70deb6e349b79f Apr 23 17:56:04.924515 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.924495 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-p5ndb"] Apr 23 17:56:04.927168 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:56:04.927145 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5d25d8b_ddd9_4d17_ad8d_3eb35aadb1bb.slice/crio-8edf928cf88a6ac5350eb7a7ff401ffb9308594233d00a1cbe93b56a6ba50ba2 WatchSource:0}: Error finding container 8edf928cf88a6ac5350eb7a7ff401ffb9308594233d00a1cbe93b56a6ba50ba2: Status 404 returned error can't find the container with id 8edf928cf88a6ac5350eb7a7ff401ffb9308594233d00a1cbe93b56a6ba50ba2 Apr 23 17:56:04.949870 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:04.949793 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-88zs6"] Apr 23 17:56:04.952046 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:56:04.952026 2574 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35ee14f0_f248_4da4_a578_5901f2cd8f5f.slice/crio-c38cb46b6268307943304932878e5046c305894ec908c69c19211af8a642744d WatchSource:0}: Error finding container c38cb46b6268307943304932878e5046c305894ec908c69c19211af8a642744d: Status 404 returned error can't find the container with id c38cb46b6268307943304932878e5046c305894ec908c69c19211af8a642744d Apr 23 17:56:05.172067 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:05.172005 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-88zs6" event={"ID":"35ee14f0-f248-4da4-a578-5901f2cd8f5f","Type":"ContainerStarted","Data":"c38cb46b6268307943304932878e5046c305894ec908c69c19211af8a642744d"} Apr 23 17:56:05.172955 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:05.172927 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-p5ndb" event={"ID":"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb","Type":"ContainerStarted","Data":"8edf928cf88a6ac5350eb7a7ff401ffb9308594233d00a1cbe93b56a6ba50ba2"} Apr 23 17:56:05.173818 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:05.173789 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-45ztw" event={"ID":"5af1b6bf-71a6-4257-9a8a-b48c1c14659c","Type":"ContainerStarted","Data":"76cf776796e561321c1893c30e2f6e7956005e170eb4bfdfdb70deb6e349b79f"} Apr 23 17:56:07.183000 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:07.182928 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-45ztw" event={"ID":"5af1b6bf-71a6-4257-9a8a-b48c1c14659c","Type":"ContainerStarted","Data":"9064ef70431db7a0da027e90c99a198fb515f26743c3c7fcf5fe1efbcb734bc4"} Apr 23 17:56:07.183000 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:07.182974 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-45ztw" 
event={"ID":"5af1b6bf-71a6-4257-9a8a-b48c1c14659c","Type":"ContainerStarted","Data":"1ddd26acea92ffe5457e437ef2b43ee246f7e3b7d798ce97db5dafa2ef253aa8"} Apr 23 17:56:07.204202 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:07.204139 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-45ztw" podStartSLOduration=120.016132511 podStartE2EDuration="2m1.204122685s" podCreationTimestamp="2026-04-23 17:54:06 +0000 UTC" firstStartedPulling="2026-04-23 17:56:04.925381493 +0000 UTC m=+202.822835256" lastFinishedPulling="2026-04-23 17:56:06.113371669 +0000 UTC m=+204.010825430" observedRunningTime="2026-04-23 17:56:07.20319293 +0000 UTC m=+205.100646702" watchObservedRunningTime="2026-04-23 17:56:07.204122685 +0000 UTC m=+205.101576458" Apr 23 17:56:10.193412 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:10.193376 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-88zs6" event={"ID":"35ee14f0-f248-4da4-a578-5901f2cd8f5f","Type":"ContainerStarted","Data":"5b9f194508a76e3c500a9e30cf91a5d661f2544ceacb6be70d5c44d8183059b4"} Apr 23 17:56:10.193878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:10.193486 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:56:10.194618 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:10.194596 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-p5ndb" event={"ID":"f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb","Type":"ContainerStarted","Data":"42bbe44d8615e06209d4fee5db72cc66e01345873ed7b92aa9836c7f68129ab8"} Apr 23 17:56:10.217168 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:10.217128 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-88zs6" podStartSLOduration=118.928148811 
podStartE2EDuration="2m3.217119616s" podCreationTimestamp="2026-04-23 17:54:07 +0000 UTC" firstStartedPulling="2026-04-23 17:56:04.953540645 +0000 UTC m=+202.850994394" lastFinishedPulling="2026-04-23 17:56:09.24251145 +0000 UTC m=+207.139965199" observedRunningTime="2026-04-23 17:56:10.215996622 +0000 UTC m=+208.113450392" watchObservedRunningTime="2026-04-23 17:56:10.217119616 +0000 UTC m=+208.114573385" Apr 23 17:56:10.231940 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:10.231906 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-p5ndb" podStartSLOduration=119.913187727 podStartE2EDuration="2m4.231895023s" podCreationTimestamp="2026-04-23 17:54:06 +0000 UTC" firstStartedPulling="2026-04-23 17:56:04.929177068 +0000 UTC m=+202.826630817" lastFinishedPulling="2026-04-23 17:56:09.247884365 +0000 UTC m=+207.145338113" observedRunningTime="2026-04-23 17:56:10.231537278 +0000 UTC m=+208.128991049" watchObservedRunningTime="2026-04-23 17:56:10.231895023 +0000 UTC m=+208.129348792" Apr 23 17:56:11.135505 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.135457 2574 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-85965948fd-64qrf" podUID="83c905f3-3f6e-49ad-bdd8-3855069141c5" containerName="console" containerID="cri-o://447daf6134034b1a72097f99e5f04e60b0cc46d49c329cc3a758bb28f026ef8e" gracePeriod=15 Apr 23 17:56:11.366697 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.366677 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-85965948fd-64qrf_83c905f3-3f6e-49ad-bdd8-3855069141c5/console/0.log" Apr 23 17:56:11.366972 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.366749 2574 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:56:11.469637 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.469614 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-service-ca\") pod \"83c905f3-3f6e-49ad-bdd8-3855069141c5\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " Apr 23 17:56:11.469774 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.469646 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-trusted-ca-bundle\") pod \"83c905f3-3f6e-49ad-bdd8-3855069141c5\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " Apr 23 17:56:11.469774 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.469670 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-config\") pod \"83c905f3-3f6e-49ad-bdd8-3855069141c5\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " Apr 23 17:56:11.469774 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.469700 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-oauth-serving-cert\") pod \"83c905f3-3f6e-49ad-bdd8-3855069141c5\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " Apr 23 17:56:11.469774 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.469745 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx655\" (UniqueName: \"kubernetes.io/projected/83c905f3-3f6e-49ad-bdd8-3855069141c5-kube-api-access-rx655\") pod \"83c905f3-3f6e-49ad-bdd8-3855069141c5\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " Apr 23 17:56:11.469985 ip-10-0-142-106 
kubenswrapper[2574]: I0423 17:56:11.469809 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-oauth-config\") pod \"83c905f3-3f6e-49ad-bdd8-3855069141c5\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " Apr 23 17:56:11.469985 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.469843 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-serving-cert\") pod \"83c905f3-3f6e-49ad-bdd8-3855069141c5\" (UID: \"83c905f3-3f6e-49ad-bdd8-3855069141c5\") " Apr 23 17:56:11.470088 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.470039 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-service-ca" (OuterVolumeSpecName: "service-ca") pod "83c905f3-3f6e-49ad-bdd8-3855069141c5" (UID: "83c905f3-3f6e-49ad-bdd8-3855069141c5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:11.470200 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.470172 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-config" (OuterVolumeSpecName: "console-config") pod "83c905f3-3f6e-49ad-bdd8-3855069141c5" (UID: "83c905f3-3f6e-49ad-bdd8-3855069141c5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:11.470260 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.470183 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "83c905f3-3f6e-49ad-bdd8-3855069141c5" (UID: "83c905f3-3f6e-49ad-bdd8-3855069141c5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:11.470323 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.470275 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "83c905f3-3f6e-49ad-bdd8-3855069141c5" (UID: "83c905f3-3f6e-49ad-bdd8-3855069141c5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 17:56:11.472164 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.472137 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "83c905f3-3f6e-49ad-bdd8-3855069141c5" (UID: "83c905f3-3f6e-49ad-bdd8-3855069141c5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:56:11.472254 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.472207 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c905f3-3f6e-49ad-bdd8-3855069141c5-kube-api-access-rx655" (OuterVolumeSpecName: "kube-api-access-rx655") pod "83c905f3-3f6e-49ad-bdd8-3855069141c5" (UID: "83c905f3-3f6e-49ad-bdd8-3855069141c5"). InnerVolumeSpecName "kube-api-access-rx655". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:56:11.472254 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.472222 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "83c905f3-3f6e-49ad-bdd8-3855069141c5" (UID: "83c905f3-3f6e-49ad-bdd8-3855069141c5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:56:11.570787 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.570744 2574 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-oauth-serving-cert\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\"" Apr 23 17:56:11.570787 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.570781 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rx655\" (UniqueName: \"kubernetes.io/projected/83c905f3-3f6e-49ad-bdd8-3855069141c5-kube-api-access-rx655\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\"" Apr 23 17:56:11.570787 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.570792 2574 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-oauth-config\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\"" Apr 23 17:56:11.570787 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.570801 2574 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-serving-cert\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\"" Apr 23 17:56:11.571016 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.570811 2574 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-service-ca\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\"" Apr 23 17:56:11.571016 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.570819 2574 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-trusted-ca-bundle\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\"" Apr 23 17:56:11.571016 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:11.570828 2574 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c905f3-3f6e-49ad-bdd8-3855069141c5-console-config\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\"" Apr 23 17:56:12.203291 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:12.203267 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-85965948fd-64qrf_83c905f3-3f6e-49ad-bdd8-3855069141c5/console/0.log" Apr 23 17:56:12.203441 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:12.203305 2574 generic.go:358] "Generic (PLEG): container finished" podID="83c905f3-3f6e-49ad-bdd8-3855069141c5" containerID="447daf6134034b1a72097f99e5f04e60b0cc46d49c329cc3a758bb28f026ef8e" exitCode=2 Apr 23 17:56:12.203441 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:12.203374 2574 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-85965948fd-64qrf" Apr 23 17:56:12.203441 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:12.203383 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85965948fd-64qrf" event={"ID":"83c905f3-3f6e-49ad-bdd8-3855069141c5","Type":"ContainerDied","Data":"447daf6134034b1a72097f99e5f04e60b0cc46d49c329cc3a758bb28f026ef8e"} Apr 23 17:56:12.203441 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:12.203414 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85965948fd-64qrf" event={"ID":"83c905f3-3f6e-49ad-bdd8-3855069141c5","Type":"ContainerDied","Data":"febf3637ce9952692ac4dfb6c3081ded626d1b5998581f745faed17b2187f9dc"} Apr 23 17:56:12.203441 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:12.203430 2574 scope.go:117] "RemoveContainer" containerID="447daf6134034b1a72097f99e5f04e60b0cc46d49c329cc3a758bb28f026ef8e" Apr 23 17:56:12.211486 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:12.211469 2574 scope.go:117] "RemoveContainer" containerID="447daf6134034b1a72097f99e5f04e60b0cc46d49c329cc3a758bb28f026ef8e" Apr 23 17:56:12.211782 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:56:12.211758 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"447daf6134034b1a72097f99e5f04e60b0cc46d49c329cc3a758bb28f026ef8e\": container with ID starting with 447daf6134034b1a72097f99e5f04e60b0cc46d49c329cc3a758bb28f026ef8e not found: ID does not exist" containerID="447daf6134034b1a72097f99e5f04e60b0cc46d49c329cc3a758bb28f026ef8e" Apr 23 17:56:12.211848 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:12.211790 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"447daf6134034b1a72097f99e5f04e60b0cc46d49c329cc3a758bb28f026ef8e"} err="failed to get container status \"447daf6134034b1a72097f99e5f04e60b0cc46d49c329cc3a758bb28f026ef8e\": rpc error: code = 
NotFound desc = could not find container \"447daf6134034b1a72097f99e5f04e60b0cc46d49c329cc3a758bb28f026ef8e\": container with ID starting with 447daf6134034b1a72097f99e5f04e60b0cc46d49c329cc3a758bb28f026ef8e not found: ID does not exist" Apr 23 17:56:12.225097 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:12.225072 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-85965948fd-64qrf"] Apr 23 17:56:12.229378 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:12.229358 2574 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-85965948fd-64qrf"] Apr 23 17:56:12.644739 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:12.644701 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c905f3-3f6e-49ad-bdd8-3855069141c5" path="/var/lib/kubelet/pods/83c905f3-3f6e-49ad-bdd8-3855069141c5/volumes" Apr 23 17:56:18.132121 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:18.132092 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:56:18.135959 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:18.135937 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-574f7989c4-mftsr" Apr 23 17:56:41.200081 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:41.200047 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-88zs6" Apr 23 17:56:50.465265 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.465229 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-66d4b6db74-8sdzx"] Apr 23 17:56:50.465653 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.465498 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83c905f3-3f6e-49ad-bdd8-3855069141c5" containerName="console" Apr 23 17:56:50.465653 ip-10-0-142-106 
kubenswrapper[2574]: I0423 17:56:50.465509 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c905f3-3f6e-49ad-bdd8-3855069141c5" containerName="console" Apr 23 17:56:50.465653 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.465549 2574 memory_manager.go:356] "RemoveStaleState removing state" podUID="83c905f3-3f6e-49ad-bdd8-3855069141c5" containerName="console" Apr 23 17:56:50.470874 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.470854 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.477081 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.477058 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66d4b6db74-8sdzx"] Apr 23 17:56:50.627665 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.627638 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-service-ca\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.627774 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.627678 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-console-config\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.627774 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.627701 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-oauth-serving-cert\") pod \"console-66d4b6db74-8sdzx\" (UID: 
\"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.627774 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.627761 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c312394a-d8e0-4056-aab6-7d6361de8521-console-serving-cert\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.627919 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.627806 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-trusted-ca-bundle\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.627919 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.627833 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c312394a-d8e0-4056-aab6-7d6361de8521-console-oauth-config\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.628012 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.627920 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmz78\" (UniqueName: \"kubernetes.io/projected/c312394a-d8e0-4056-aab6-7d6361de8521-kube-api-access-nmz78\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.728500 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.728425 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-nmz78\" (UniqueName: \"kubernetes.io/projected/c312394a-d8e0-4056-aab6-7d6361de8521-kube-api-access-nmz78\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.728500 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.728467 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-service-ca\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.728500 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.728491 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-console-config\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.728735 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.728514 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-oauth-serving-cert\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.728735 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.728548 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c312394a-d8e0-4056-aab6-7d6361de8521-console-serving-cert\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.728735 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.728588 2574 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-trusted-ca-bundle\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.728948 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.728924 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c312394a-d8e0-4056-aab6-7d6361de8521-console-oauth-config\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.729177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.729155 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-service-ca\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.729325 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.729298 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-console-config\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.729659 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.729635 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-oauth-serving-cert\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.729843 ip-10-0-142-106 
kubenswrapper[2574]: I0423 17:56:50.729823 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-trusted-ca-bundle\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.731136 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.731117 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c312394a-d8e0-4056-aab6-7d6361de8521-console-serving-cert\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.731368 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.731346 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c312394a-d8e0-4056-aab6-7d6361de8521-console-oauth-config\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.737525 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.737505 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmz78\" (UniqueName: \"kubernetes.io/projected/c312394a-d8e0-4056-aab6-7d6361de8521-kube-api-access-nmz78\") pod \"console-66d4b6db74-8sdzx\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.780503 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.780478 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 17:56:50.893719 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:50.893691 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66d4b6db74-8sdzx"] Apr 23 17:56:50.898655 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:56:50.898627 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc312394a_d8e0_4056_aab6_7d6361de8521.slice/crio-037615e5fac32a8f6ded482af3178661741f779970391545ff59ff6853a8aa82 WatchSource:0}: Error finding container 037615e5fac32a8f6ded482af3178661741f779970391545ff59ff6853a8aa82: Status 404 returned error can't find the container with id 037615e5fac32a8f6ded482af3178661741f779970391545ff59ff6853a8aa82 Apr 23 17:56:51.305878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:51.305844 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66d4b6db74-8sdzx" event={"ID":"c312394a-d8e0-4056-aab6-7d6361de8521","Type":"ContainerStarted","Data":"6ba471a405284910b32eeb1948786c48b222674fe7a1b4a98fdf0f6833abde15"} Apr 23 17:56:51.305878 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:51.305882 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66d4b6db74-8sdzx" event={"ID":"c312394a-d8e0-4056-aab6-7d6361de8521","Type":"ContainerStarted","Data":"037615e5fac32a8f6ded482af3178661741f779970391545ff59ff6853a8aa82"} Apr 23 17:56:51.326695 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:56:51.326648 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-66d4b6db74-8sdzx" podStartSLOduration=1.326634314 podStartE2EDuration="1.326634314s" podCreationTimestamp="2026-04-23 17:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:56:51.325898714 +0000 UTC 
m=+249.223352485" watchObservedRunningTime="2026-04-23 17:56:51.326634314 +0000 UTC m=+249.224088084"
Apr 23 17:57:00.781057 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:00.780931 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-66d4b6db74-8sdzx"
Apr 23 17:57:00.781057 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:00.780966 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-66d4b6db74-8sdzx"
Apr 23 17:57:00.785508 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:00.785483 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-66d4b6db74-8sdzx"
Apr 23 17:57:01.340338 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:01.340311 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-66d4b6db74-8sdzx"
Apr 23 17:57:01.400236 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:01.400205 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-848b8cf89c-v72pl"]
Apr 23 17:57:26.419883 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.419823 2574 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-848b8cf89c-v72pl" podUID="e51e2ea4-6814-44c7-8e06-9ac9f446025a" containerName="console" containerID="cri-o://057c6a54767edabeacda2821a05bfd0b845ba6a8994d8761508208daf0119f2e" gracePeriod=15
Apr 23 17:57:26.650533 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.650512 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-848b8cf89c-v72pl_e51e2ea4-6814-44c7-8e06-9ac9f446025a/console/0.log"
Apr 23 17:57:26.650655 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.650567 2574 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-848b8cf89c-v72pl"
Apr 23 17:57:26.769676 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.769649 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-serving-cert\") pod \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") "
Apr 23 17:57:26.769822 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.769684 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-config\") pod \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") "
Apr 23 17:57:26.769822 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.769721 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-oauth-serving-cert\") pod \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") "
Apr 23 17:57:26.769941 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.769827 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-service-ca\") pod \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") "
Apr 23 17:57:26.769941 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.769878 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27jk7\" (UniqueName: \"kubernetes.io/projected/e51e2ea4-6814-44c7-8e06-9ac9f446025a-kube-api-access-27jk7\") pod \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") "
Apr 23 17:57:26.769941 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.769932 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-oauth-config\") pod \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") "
Apr 23 17:57:26.770087 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.769962 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-trusted-ca-bundle\") pod \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\" (UID: \"e51e2ea4-6814-44c7-8e06-9ac9f446025a\") "
Apr 23 17:57:26.770141 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.770110 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e51e2ea4-6814-44c7-8e06-9ac9f446025a" (UID: "e51e2ea4-6814-44c7-8e06-9ac9f446025a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:57:26.770141 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.770124 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-config" (OuterVolumeSpecName: "console-config") pod "e51e2ea4-6814-44c7-8e06-9ac9f446025a" (UID: "e51e2ea4-6814-44c7-8e06-9ac9f446025a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:57:26.770238 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.770213 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-service-ca" (OuterVolumeSpecName: "service-ca") pod "e51e2ea4-6814-44c7-8e06-9ac9f446025a" (UID: "e51e2ea4-6814-44c7-8e06-9ac9f446025a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:57:26.770536 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.770513 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e51e2ea4-6814-44c7-8e06-9ac9f446025a" (UID: "e51e2ea4-6814-44c7-8e06-9ac9f446025a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:57:26.772031 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.771999 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e51e2ea4-6814-44c7-8e06-9ac9f446025a" (UID: "e51e2ea4-6814-44c7-8e06-9ac9f446025a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 17:57:26.772119 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.772039 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e51e2ea4-6814-44c7-8e06-9ac9f446025a-kube-api-access-27jk7" (OuterVolumeSpecName: "kube-api-access-27jk7") pod "e51e2ea4-6814-44c7-8e06-9ac9f446025a" (UID: "e51e2ea4-6814-44c7-8e06-9ac9f446025a"). InnerVolumeSpecName "kube-api-access-27jk7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 17:57:26.772119 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.772064 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e51e2ea4-6814-44c7-8e06-9ac9f446025a" (UID: "e51e2ea4-6814-44c7-8e06-9ac9f446025a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 17:57:26.871048 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.871024 2574 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-oauth-serving-cert\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\""
Apr 23 17:57:26.871048 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.871045 2574 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-service-ca\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\""
Apr 23 17:57:26.871166 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.871056 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-27jk7\" (UniqueName: \"kubernetes.io/projected/e51e2ea4-6814-44c7-8e06-9ac9f446025a-kube-api-access-27jk7\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\""
Apr 23 17:57:26.871166 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.871064 2574 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-oauth-config\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\""
Apr 23 17:57:26.871166 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.871073 2574 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-trusted-ca-bundle\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\""
Apr 23 17:57:26.871166 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.871082 2574 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-serving-cert\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\""
Apr 23 17:57:26.871166 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:26.871090 2574 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e51e2ea4-6814-44c7-8e06-9ac9f446025a-console-config\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\""
Apr 23 17:57:27.415341 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:27.415316 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-848b8cf89c-v72pl_e51e2ea4-6814-44c7-8e06-9ac9f446025a/console/0.log"
Apr 23 17:57:27.415469 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:27.415367 2574 generic.go:358] "Generic (PLEG): container finished" podID="e51e2ea4-6814-44c7-8e06-9ac9f446025a" containerID="057c6a54767edabeacda2821a05bfd0b845ba6a8994d8761508208daf0119f2e" exitCode=2
Apr 23 17:57:27.415469 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:27.415399 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-848b8cf89c-v72pl" event={"ID":"e51e2ea4-6814-44c7-8e06-9ac9f446025a","Type":"ContainerDied","Data":"057c6a54767edabeacda2821a05bfd0b845ba6a8994d8761508208daf0119f2e"}
Apr 23 17:57:27.415469 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:27.415436 2574 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-848b8cf89c-v72pl"
Apr 23 17:57:27.415469 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:27.415448 2574 scope.go:117] "RemoveContainer" containerID="057c6a54767edabeacda2821a05bfd0b845ba6a8994d8761508208daf0119f2e"
Apr 23 17:57:27.415662 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:27.415437 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-848b8cf89c-v72pl" event={"ID":"e51e2ea4-6814-44c7-8e06-9ac9f446025a","Type":"ContainerDied","Data":"dcde9fb861f5137d2a899b9d87fe823de1e853071ddf379143cd62a2d963c791"}
Apr 23 17:57:27.423780 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:27.423543 2574 scope.go:117] "RemoveContainer" containerID="057c6a54767edabeacda2821a05bfd0b845ba6a8994d8761508208daf0119f2e"
Apr 23 17:57:27.424051 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:57:27.423935 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"057c6a54767edabeacda2821a05bfd0b845ba6a8994d8761508208daf0119f2e\": container with ID starting with 057c6a54767edabeacda2821a05bfd0b845ba6a8994d8761508208daf0119f2e not found: ID does not exist" containerID="057c6a54767edabeacda2821a05bfd0b845ba6a8994d8761508208daf0119f2e"
Apr 23 17:57:27.424051 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:27.423972 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"057c6a54767edabeacda2821a05bfd0b845ba6a8994d8761508208daf0119f2e"} err="failed to get container status \"057c6a54767edabeacda2821a05bfd0b845ba6a8994d8761508208daf0119f2e\": rpc error: code = NotFound desc = could not find container \"057c6a54767edabeacda2821a05bfd0b845ba6a8994d8761508208daf0119f2e\": container with ID starting with 057c6a54767edabeacda2821a05bfd0b845ba6a8994d8761508208daf0119f2e not found: ID does not exist"
Apr 23 17:57:27.438121 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:27.438100 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-848b8cf89c-v72pl"]
Apr 23 17:57:27.443357 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:27.443334 2574 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-848b8cf89c-v72pl"]
Apr 23 17:57:28.644780 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:28.644745 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e51e2ea4-6814-44c7-8e06-9ac9f446025a" path="/var/lib/kubelet/pods/e51e2ea4-6814-44c7-8e06-9ac9f446025a/volumes"
Apr 23 17:57:41.183843 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.183800 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr"]
Apr 23 17:57:41.184363 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.184076 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e51e2ea4-6814-44c7-8e06-9ac9f446025a" containerName="console"
Apr 23 17:57:41.184363 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.184090 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="e51e2ea4-6814-44c7-8e06-9ac9f446025a" containerName="console"
Apr 23 17:57:41.184363 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.184144 2574 memory_manager.go:356] "RemoveStaleState removing state" podUID="e51e2ea4-6814-44c7-8e06-9ac9f446025a" containerName="console"
Apr 23 17:57:41.187652 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.187629 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr"
Apr 23 17:57:41.190284 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.190249 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Apr 23 17:57:41.190403 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.190382 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-nrhlz\""
Apr 23 17:57:41.190452 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.190399 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Apr 23 17:57:41.195196 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.195152 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr"]
Apr 23 17:57:41.264348 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.264321 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr\" (UID: \"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr"
Apr 23 17:57:41.264485 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.264356 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr\" (UID: \"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr"
Apr 23 17:57:41.264485 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.264380 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsjc7\" (UniqueName: \"kubernetes.io/projected/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-kube-api-access-dsjc7\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr\" (UID: \"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr"
Apr 23 17:57:41.365589 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.365547 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr\" (UID: \"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr"
Apr 23 17:57:41.365722 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.365610 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr\" (UID: \"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr"
Apr 23 17:57:41.365722 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.365643 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsjc7\" (UniqueName: \"kubernetes.io/projected/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-kube-api-access-dsjc7\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr\" (UID: \"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr"
Apr 23 17:57:41.365953 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.365933 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr\" (UID: \"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr"
Apr 23 17:57:41.366011 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.365963 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr\" (UID: \"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr"
Apr 23 17:57:41.375388 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.375359 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsjc7\" (UniqueName: \"kubernetes.io/projected/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-kube-api-access-dsjc7\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr\" (UID: \"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr"
Apr 23 17:57:41.499043 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.499012 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr"
Apr 23 17:57:41.611861 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:41.611824 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr"]
Apr 23 17:57:41.614518 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:57:41.614489 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ee0472a_44c1_4bc4_ae2a_9a583e88abc2.slice/crio-5b7b19db100fcfb5243ed555b1f4a4494ee3a374a5de96b501e05ab25e29bc33 WatchSource:0}: Error finding container 5b7b19db100fcfb5243ed555b1f4a4494ee3a374a5de96b501e05ab25e29bc33: Status 404 returned error can't find the container with id 5b7b19db100fcfb5243ed555b1f4a4494ee3a374a5de96b501e05ab25e29bc33
Apr 23 17:57:42.460381 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:42.460330 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr" event={"ID":"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2","Type":"ContainerStarted","Data":"5b7b19db100fcfb5243ed555b1f4a4494ee3a374a5de96b501e05ab25e29bc33"}
Apr 23 17:57:42.534444 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:42.534405 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log"
Apr 23 17:57:42.535082 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:42.535060 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log"
Apr 23 17:57:42.536525 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:42.536503 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log"
Apr 23 17:57:42.536732 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:42.536711 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log"
Apr 23 17:57:42.538253 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:42.538234 2574 kubelet.go:1628] "Image garbage collection succeeded"
Apr 23 17:57:47.476136 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:47.476092 2574 generic.go:358] "Generic (PLEG): container finished" podID="4ee0472a-44c1-4bc4-ae2a-9a583e88abc2" containerID="cd3d63d9c5ab6485eb597be05dae10b12cc08878e09ac664cb0e97c521e3bbbd" exitCode=0
Apr 23 17:57:47.476565 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:47.476149 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr" event={"ID":"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2","Type":"ContainerDied","Data":"cd3d63d9c5ab6485eb597be05dae10b12cc08878e09ac664cb0e97c521e3bbbd"}
Apr 23 17:57:47.477177 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:47.477152 2574 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 17:57:50.490859 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:50.490820 2574 generic.go:358] "Generic (PLEG): container finished" podID="4ee0472a-44c1-4bc4-ae2a-9a583e88abc2" containerID="fd6970f1167877b850b9cf4b250b054baeb2eca0ea410fc24a586a3e6ddf101d" exitCode=0
Apr 23 17:57:50.491284 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:50.490914 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr" event={"ID":"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2","Type":"ContainerDied","Data":"fd6970f1167877b850b9cf4b250b054baeb2eca0ea410fc24a586a3e6ddf101d"}
Apr 23 17:57:57.512663 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:57.512612 2574 generic.go:358] "Generic (PLEG): container finished" podID="4ee0472a-44c1-4bc4-ae2a-9a583e88abc2" containerID="4fd7750f9423370e0ffaba8e89c7e8dc2f8287c45afdd63f066eb3c6ac13c502" exitCode=0
Apr 23 17:57:57.513053 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:57.512704 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr" event={"ID":"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2","Type":"ContainerDied","Data":"4fd7750f9423370e0ffaba8e89c7e8dc2f8287c45afdd63f066eb3c6ac13c502"}
Apr 23 17:57:58.627875 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:58.627852 2574 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr"
Apr 23 17:57:58.702072 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:58.702051 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-util\") pod \"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2\" (UID: \"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2\") "
Apr 23 17:57:58.702211 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:58.702088 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-bundle\") pod \"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2\" (UID: \"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2\") "
Apr 23 17:57:58.702211 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:58.702141 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsjc7\" (UniqueName: \"kubernetes.io/projected/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-kube-api-access-dsjc7\") pod \"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2\" (UID: \"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2\") "
Apr 23 17:57:58.702746 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:58.702717 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-bundle" (OuterVolumeSpecName: "bundle") pod "4ee0472a-44c1-4bc4-ae2a-9a583e88abc2" (UID: "4ee0472a-44c1-4bc4-ae2a-9a583e88abc2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 23 17:57:58.704195 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:58.704171 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-kube-api-access-dsjc7" (OuterVolumeSpecName: "kube-api-access-dsjc7") pod "4ee0472a-44c1-4bc4-ae2a-9a583e88abc2" (UID: "4ee0472a-44c1-4bc4-ae2a-9a583e88abc2"). InnerVolumeSpecName "kube-api-access-dsjc7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 17:57:58.706412 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:58.706382 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-util" (OuterVolumeSpecName: "util") pod "4ee0472a-44c1-4bc4-ae2a-9a583e88abc2" (UID: "4ee0472a-44c1-4bc4-ae2a-9a583e88abc2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 23 17:57:58.803316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:58.803266 2574 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-util\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\""
Apr 23 17:57:58.803316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:58.803286 2574 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-bundle\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\""
Apr 23 17:57:58.803316 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:58.803296 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dsjc7\" (UniqueName: \"kubernetes.io/projected/4ee0472a-44c1-4bc4-ae2a-9a583e88abc2-kube-api-access-dsjc7\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\""
Apr 23 17:57:59.519139 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:59.519097 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr" event={"ID":"4ee0472a-44c1-4bc4-ae2a-9a583e88abc2","Type":"ContainerDied","Data":"5b7b19db100fcfb5243ed555b1f4a4494ee3a374a5de96b501e05ab25e29bc33"}
Apr 23 17:57:59.519139 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:59.519133 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b7b19db100fcfb5243ed555b1f4a4494ee3a374a5de96b501e05ab25e29bc33"
Apr 23 17:57:59.519335 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:57:59.519154 2574 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c5bsvr"
Apr 23 17:58:03.137866 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.137833 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qx85w"]
Apr 23 17:58:03.138294 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.138079 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ee0472a-44c1-4bc4-ae2a-9a583e88abc2" containerName="util"
Apr 23 17:58:03.138294 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.138090 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee0472a-44c1-4bc4-ae2a-9a583e88abc2" containerName="util"
Apr 23 17:58:03.138294 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.138108 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ee0472a-44c1-4bc4-ae2a-9a583e88abc2" containerName="extract"
Apr 23 17:58:03.138294 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.138114 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee0472a-44c1-4bc4-ae2a-9a583e88abc2" containerName="extract"
Apr 23 17:58:03.138294 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.138122 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ee0472a-44c1-4bc4-ae2a-9a583e88abc2" containerName="pull"
Apr 23 17:58:03.138294 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.138128 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee0472a-44c1-4bc4-ae2a-9a583e88abc2" containerName="pull"
Apr 23 17:58:03.138294 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.138161 2574 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ee0472a-44c1-4bc4-ae2a-9a583e88abc2" containerName="extract"
Apr 23 17:58:03.144998 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.144980 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qx85w"
Apr 23 17:58:03.147636 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.147614 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"custom-metrics-autoscaler-operator-dockercfg-k7xcj\""
Apr 23 17:58:03.147849 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.147820 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"kube-root-ca.crt\""
Apr 23 17:58:03.148244 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.148226 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"kedaorg-certs\""
Apr 23 17:58:03.149988 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.149970 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"openshift-service-ca.crt\""
Apr 23 17:58:03.153276 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.153250 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qx85w"]
Apr 23 17:58:03.232964 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.232940 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/2b3b2548-5da8-4804-bdeb-6b45f81dc106-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-qx85w\" (UID: \"2b3b2548-5da8-4804-bdeb-6b45f81dc106\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qx85w"
Apr 23 17:58:03.233063 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.232977 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hvkq\" (UniqueName: \"kubernetes.io/projected/2b3b2548-5da8-4804-bdeb-6b45f81dc106-kube-api-access-5hvkq\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-qx85w\" (UID: \"2b3b2548-5da8-4804-bdeb-6b45f81dc106\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qx85w"
Apr 23 17:58:03.334171 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.334132 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/2b3b2548-5da8-4804-bdeb-6b45f81dc106-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-qx85w\" (UID: \"2b3b2548-5da8-4804-bdeb-6b45f81dc106\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qx85w"
Apr 23 17:58:03.334266 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.334183 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5hvkq\" (UniqueName: \"kubernetes.io/projected/2b3b2548-5da8-4804-bdeb-6b45f81dc106-kube-api-access-5hvkq\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-qx85w\" (UID: \"2b3b2548-5da8-4804-bdeb-6b45f81dc106\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qx85w"
Apr 23 17:58:03.336436 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.336414 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/2b3b2548-5da8-4804-bdeb-6b45f81dc106-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-qx85w\" (UID: \"2b3b2548-5da8-4804-bdeb-6b45f81dc106\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qx85w"
Apr 23 17:58:03.349608 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.349563 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hvkq\" (UniqueName: \"kubernetes.io/projected/2b3b2548-5da8-4804-bdeb-6b45f81dc106-kube-api-access-5hvkq\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-qx85w\" (UID: \"2b3b2548-5da8-4804-bdeb-6b45f81dc106\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qx85w"
Apr 23 17:58:03.455908 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.455845 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qx85w"
Apr 23 17:58:03.593137 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:03.593112 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qx85w"]
Apr 23 17:58:03.595704 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:58:03.595676 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b3b2548_5da8_4804_bdeb_6b45f81dc106.slice/crio-fcc7992306650abf7d5f6c8fe4f8f3ecffb81521508a2986ba1404ecb5aa5ba1 WatchSource:0}: Error finding container fcc7992306650abf7d5f6c8fe4f8f3ecffb81521508a2986ba1404ecb5aa5ba1: Status 404 returned error can't find the container with id fcc7992306650abf7d5f6c8fe4f8f3ecffb81521508a2986ba1404ecb5aa5ba1
Apr 23 17:58:04.546200 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:04.546157 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qx85w" event={"ID":"2b3b2548-5da8-4804-bdeb-6b45f81dc106","Type":"ContainerStarted","Data":"fcc7992306650abf7d5f6c8fe4f8f3ecffb81521508a2986ba1404ecb5aa5ba1"}
Apr 23 17:58:07.411720 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.411685 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-6q6bp"]
Apr 23 17:58:07.414698 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.414683 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-6q6bp"
Apr 23 17:58:07.416990 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.416970 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-certs\""
Apr 23 17:58:07.417104 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.417067 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"keda-ocp-cabundle\""
Apr 23 17:58:07.417104 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.417077 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-2ctcg\""
Apr 23 17:58:07.422603 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.422561 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-6q6bp"]
Apr 23 17:58:07.468997 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.468975 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/1488f8ce-4a38-42c4-bf06-11aecf7277e0-cabundle0\") pod \"keda-operator-ffbb595cb-6q6bp\" (UID: \"1488f8ce-4a38-42c4-bf06-11aecf7277e0\") " pod="openshift-keda/keda-operator-ffbb595cb-6q6bp"
Apr 23 17:58:07.469107 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.469011 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkv5v\" (UniqueName: \"kubernetes.io/projected/1488f8ce-4a38-42c4-bf06-11aecf7277e0-kube-api-access-pkv5v\") pod \"keda-operator-ffbb595cb-6q6bp\" (UID: \"1488f8ce-4a38-42c4-bf06-11aecf7277e0\") " pod="openshift-keda/keda-operator-ffbb595cb-6q6bp"
Apr 23 17:58:07.469107 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.469067 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/1488f8ce-4a38-42c4-bf06-11aecf7277e0-certificates\") pod \"keda-operator-ffbb595cb-6q6bp\" (UID: \"1488f8ce-4a38-42c4-bf06-11aecf7277e0\") " pod="openshift-keda/keda-operator-ffbb595cb-6q6bp"
Apr 23 17:58:07.556631 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.556601 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qx85w" event={"ID":"2b3b2548-5da8-4804-bdeb-6b45f81dc106","Type":"ContainerStarted","Data":"89d29d0dd267ae4ec5dec94510531e0816f8063164b4158b69723b4603faac3b"}
Apr 23 17:58:07.556793 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.556772 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qx85w"
Apr 23 17:58:07.570132 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.570106 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/1488f8ce-4a38-42c4-bf06-11aecf7277e0-cabundle0\") pod \"keda-operator-ffbb595cb-6q6bp\" (UID: \"1488f8ce-4a38-42c4-bf06-11aecf7277e0\") " pod="openshift-keda/keda-operator-ffbb595cb-6q6bp"
Apr 23 17:58:07.570247 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.570155 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pkv5v\" (UniqueName: \"kubernetes.io/projected/1488f8ce-4a38-42c4-bf06-11aecf7277e0-kube-api-access-pkv5v\") pod \"keda-operator-ffbb595cb-6q6bp\" (UID: \"1488f8ce-4a38-42c4-bf06-11aecf7277e0\") " pod="openshift-keda/keda-operator-ffbb595cb-6q6bp"
Apr 23 17:58:07.570247 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.570213 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/1488f8ce-4a38-42c4-bf06-11aecf7277e0-certificates\") pod \"keda-operator-ffbb595cb-6q6bp\" (UID:
\"1488f8ce-4a38-42c4-bf06-11aecf7277e0\") " pod="openshift-keda/keda-operator-ffbb595cb-6q6bp" Apr 23 17:58:07.570363 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:07.570330 2574 secret.go:281] references non-existent secret key: ca.crt Apr 23 17:58:07.570363 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:07.570345 2574 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 23 17:58:07.570363 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:07.570355 2574 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-6q6bp: references non-existent secret key: ca.crt Apr 23 17:58:07.570530 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:07.570413 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1488f8ce-4a38-42c4-bf06-11aecf7277e0-certificates podName:1488f8ce-4a38-42c4-bf06-11aecf7277e0 nodeName:}" failed. No retries permitted until 2026-04-23 17:58:08.070394549 +0000 UTC m=+325.967848304 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/1488f8ce-4a38-42c4-bf06-11aecf7277e0-certificates") pod "keda-operator-ffbb595cb-6q6bp" (UID: "1488f8ce-4a38-42c4-bf06-11aecf7277e0") : references non-existent secret key: ca.crt Apr 23 17:58:07.570939 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.570899 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/1488f8ce-4a38-42c4-bf06-11aecf7277e0-cabundle0\") pod \"keda-operator-ffbb595cb-6q6bp\" (UID: \"1488f8ce-4a38-42c4-bf06-11aecf7277e0\") " pod="openshift-keda/keda-operator-ffbb595cb-6q6bp" Apr 23 17:58:07.587808 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.587747 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qx85w" podStartSLOduration=1.381618065 podStartE2EDuration="4.587737271s" podCreationTimestamp="2026-04-23 17:58:03 +0000 UTC" firstStartedPulling="2026-04-23 17:58:03.597471968 +0000 UTC m=+321.494925715" lastFinishedPulling="2026-04-23 17:58:06.803591173 +0000 UTC m=+324.701044921" observedRunningTime="2026-04-23 17:58:07.585947807 +0000 UTC m=+325.483401581" watchObservedRunningTime="2026-04-23 17:58:07.587737271 +0000 UTC m=+325.485191041" Apr 23 17:58:07.592452 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.592430 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkv5v\" (UniqueName: \"kubernetes.io/projected/1488f8ce-4a38-42c4-bf06-11aecf7277e0-kube-api-access-pkv5v\") pod \"keda-operator-ffbb595cb-6q6bp\" (UID: \"1488f8ce-4a38-42c4-bf06-11aecf7277e0\") " pod="openshift-keda/keda-operator-ffbb595cb-6q6bp" Apr 23 17:58:07.691301 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.691246 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr"] Apr 23 17:58:07.694340 ip-10-0-142-106 
kubenswrapper[2574]: I0423 17:58:07.694327 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" Apr 23 17:58:07.696869 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.696850 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-metrics-apiserver-certs\"" Apr 23 17:58:07.702352 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.702328 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr"] Apr 23 17:58:07.771537 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.771512 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/3b6273ea-649c-4bb4-8ca0-047aa0046706-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-wzhmr\" (UID: \"3b6273ea-649c-4bb4-8ca0-047aa0046706\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" Apr 23 17:58:07.771654 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.771539 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89dtm\" (UniqueName: \"kubernetes.io/projected/3b6273ea-649c-4bb4-8ca0-047aa0046706-kube-api-access-89dtm\") pod \"keda-metrics-apiserver-7c9f485588-wzhmr\" (UID: \"3b6273ea-649c-4bb4-8ca0-047aa0046706\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" Apr 23 17:58:07.771654 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.771604 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/3b6273ea-649c-4bb4-8ca0-047aa0046706-certificates\") pod \"keda-metrics-apiserver-7c9f485588-wzhmr\" (UID: \"3b6273ea-649c-4bb4-8ca0-047aa0046706\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" Apr 23 17:58:07.872638 ip-10-0-142-106 
kubenswrapper[2574]: I0423 17:58:07.872615 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-89dtm\" (UniqueName: \"kubernetes.io/projected/3b6273ea-649c-4bb4-8ca0-047aa0046706-kube-api-access-89dtm\") pod \"keda-metrics-apiserver-7c9f485588-wzhmr\" (UID: \"3b6273ea-649c-4bb4-8ca0-047aa0046706\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" Apr 23 17:58:07.872769 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.872714 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/3b6273ea-649c-4bb4-8ca0-047aa0046706-certificates\") pod \"keda-metrics-apiserver-7c9f485588-wzhmr\" (UID: \"3b6273ea-649c-4bb4-8ca0-047aa0046706\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" Apr 23 17:58:07.872769 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.872763 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/3b6273ea-649c-4bb4-8ca0-047aa0046706-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-wzhmr\" (UID: \"3b6273ea-649c-4bb4-8ca0-047aa0046706\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" Apr 23 17:58:07.872880 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:07.872841 2574 secret.go:281] references non-existent secret key: tls.crt Apr 23 17:58:07.872880 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:07.872862 2574 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 17:58:07.872978 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:07.872882 2574 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr: references non-existent secret key: tls.crt Apr 23 17:58:07.872978 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:07.872934 2574 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6273ea-649c-4bb4-8ca0-047aa0046706-certificates podName:3b6273ea-649c-4bb4-8ca0-047aa0046706 nodeName:}" failed. No retries permitted until 2026-04-23 17:58:08.37291877 +0000 UTC m=+326.270372518 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/3b6273ea-649c-4bb4-8ca0-047aa0046706-certificates") pod "keda-metrics-apiserver-7c9f485588-wzhmr" (UID: "3b6273ea-649c-4bb4-8ca0-047aa0046706") : references non-existent secret key: tls.crt Apr 23 17:58:07.873150 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.873123 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/3b6273ea-649c-4bb4-8ca0-047aa0046706-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-wzhmr\" (UID: \"3b6273ea-649c-4bb4-8ca0-047aa0046706\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" Apr 23 17:58:07.884870 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:07.884843 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-89dtm\" (UniqueName: \"kubernetes.io/projected/3b6273ea-649c-4bb4-8ca0-047aa0046706-kube-api-access-89dtm\") pod \"keda-metrics-apiserver-7c9f485588-wzhmr\" (UID: \"3b6273ea-649c-4bb4-8ca0-047aa0046706\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" Apr 23 17:58:08.013398 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:08.013369 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-admission-cf49989db-gmzv6"] Apr 23 17:58:08.016434 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:08.016418 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-gmzv6" Apr 23 17:58:08.018944 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:08.018918 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-admission-webhooks-certs\"" Apr 23 17:58:08.028028 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:08.028000 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-gmzv6"] Apr 23 17:58:08.074484 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:08.074459 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/1488f8ce-4a38-42c4-bf06-11aecf7277e0-certificates\") pod \"keda-operator-ffbb595cb-6q6bp\" (UID: \"1488f8ce-4a38-42c4-bf06-11aecf7277e0\") " pod="openshift-keda/keda-operator-ffbb595cb-6q6bp" Apr 23 17:58:08.074600 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:08.074507 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/1290562a-df13-4f0c-99ab-dc4c1a8f89a7-certificates\") pod \"keda-admission-cf49989db-gmzv6\" (UID: \"1290562a-df13-4f0c-99ab-dc4c1a8f89a7\") " pod="openshift-keda/keda-admission-cf49989db-gmzv6" Apr 23 17:58:08.074600 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:08.074555 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlkrh\" (UniqueName: \"kubernetes.io/projected/1290562a-df13-4f0c-99ab-dc4c1a8f89a7-kube-api-access-dlkrh\") pod \"keda-admission-cf49989db-gmzv6\" (UID: \"1290562a-df13-4f0c-99ab-dc4c1a8f89a7\") " pod="openshift-keda/keda-admission-cf49989db-gmzv6" Apr 23 17:58:08.074718 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:08.074602 2574 secret.go:281] references non-existent secret key: ca.crt Apr 23 17:58:08.074718 ip-10-0-142-106 kubenswrapper[2574]: E0423 
17:58:08.074616 2574 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 23 17:58:08.074718 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:08.074624 2574 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-6q6bp: references non-existent secret key: ca.crt Apr 23 17:58:08.074718 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:08.074676 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1488f8ce-4a38-42c4-bf06-11aecf7277e0-certificates podName:1488f8ce-4a38-42c4-bf06-11aecf7277e0 nodeName:}" failed. No retries permitted until 2026-04-23 17:58:09.074661187 +0000 UTC m=+326.972114934 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/1488f8ce-4a38-42c4-bf06-11aecf7277e0-certificates") pod "keda-operator-ffbb595cb-6q6bp" (UID: "1488f8ce-4a38-42c4-bf06-11aecf7277e0") : references non-existent secret key: ca.crt Apr 23 17:58:08.175815 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:08.175793 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/1290562a-df13-4f0c-99ab-dc4c1a8f89a7-certificates\") pod \"keda-admission-cf49989db-gmzv6\" (UID: \"1290562a-df13-4f0c-99ab-dc4c1a8f89a7\") " pod="openshift-keda/keda-admission-cf49989db-gmzv6" Apr 23 17:58:08.175910 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:08.175843 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dlkrh\" (UniqueName: \"kubernetes.io/projected/1290562a-df13-4f0c-99ab-dc4c1a8f89a7-kube-api-access-dlkrh\") pod \"keda-admission-cf49989db-gmzv6\" (UID: \"1290562a-df13-4f0c-99ab-dc4c1a8f89a7\") " pod="openshift-keda/keda-admission-cf49989db-gmzv6" Apr 23 17:58:08.175950 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:08.175932 2574 
projected.go:264] Couldn't get secret openshift-keda/keda-admission-webhooks-certs: secret "keda-admission-webhooks-certs" not found Apr 23 17:58:08.175984 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:08.175952 2574 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-admission-cf49989db-gmzv6: secret "keda-admission-webhooks-certs" not found Apr 23 17:58:08.176017 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:08.176001 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1290562a-df13-4f0c-99ab-dc4c1a8f89a7-certificates podName:1290562a-df13-4f0c-99ab-dc4c1a8f89a7 nodeName:}" failed. No retries permitted until 2026-04-23 17:58:08.675985392 +0000 UTC m=+326.573439153 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/1290562a-df13-4f0c-99ab-dc4c1a8f89a7-certificates") pod "keda-admission-cf49989db-gmzv6" (UID: "1290562a-df13-4f0c-99ab-dc4c1a8f89a7") : secret "keda-admission-webhooks-certs" not found Apr 23 17:58:08.189193 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:08.189169 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlkrh\" (UniqueName: \"kubernetes.io/projected/1290562a-df13-4f0c-99ab-dc4c1a8f89a7-kube-api-access-dlkrh\") pod \"keda-admission-cf49989db-gmzv6\" (UID: \"1290562a-df13-4f0c-99ab-dc4c1a8f89a7\") " pod="openshift-keda/keda-admission-cf49989db-gmzv6" Apr 23 17:58:08.376823 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:08.376766 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/3b6273ea-649c-4bb4-8ca0-047aa0046706-certificates\") pod \"keda-metrics-apiserver-7c9f485588-wzhmr\" (UID: \"3b6273ea-649c-4bb4-8ca0-047aa0046706\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" Apr 23 17:58:08.376908 ip-10-0-142-106 kubenswrapper[2574]: E0423 
17:58:08.376867 2574 secret.go:281] references non-existent secret key: tls.crt Apr 23 17:58:08.376908 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:08.376878 2574 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 17:58:08.376908 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:08.376892 2574 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr: references non-existent secret key: tls.crt Apr 23 17:58:08.377003 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:08.376931 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6273ea-649c-4bb4-8ca0-047aa0046706-certificates podName:3b6273ea-649c-4bb4-8ca0-047aa0046706 nodeName:}" failed. No retries permitted until 2026-04-23 17:58:09.376920747 +0000 UTC m=+327.274374496 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/3b6273ea-649c-4bb4-8ca0-047aa0046706-certificates") pod "keda-metrics-apiserver-7c9f485588-wzhmr" (UID: "3b6273ea-649c-4bb4-8ca0-047aa0046706") : references non-existent secret key: tls.crt Apr 23 17:58:08.678773 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:08.678701 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/1290562a-df13-4f0c-99ab-dc4c1a8f89a7-certificates\") pod \"keda-admission-cf49989db-gmzv6\" (UID: \"1290562a-df13-4f0c-99ab-dc4c1a8f89a7\") " pod="openshift-keda/keda-admission-cf49989db-gmzv6" Apr 23 17:58:08.680955 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:08.680932 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/1290562a-df13-4f0c-99ab-dc4c1a8f89a7-certificates\") pod \"keda-admission-cf49989db-gmzv6\" (UID: \"1290562a-df13-4f0c-99ab-dc4c1a8f89a7\") 
" pod="openshift-keda/keda-admission-cf49989db-gmzv6" Apr 23 17:58:08.926137 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:08.926098 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-gmzv6" Apr 23 17:58:09.048347 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:09.048319 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-gmzv6"] Apr 23 17:58:09.050895 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:58:09.050869 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1290562a_df13_4f0c_99ab_dc4c1a8f89a7.slice/crio-2dce123b96ce80748743dd166ab91ba7c014886797663692b7252d8c526072b3 WatchSource:0}: Error finding container 2dce123b96ce80748743dd166ab91ba7c014886797663692b7252d8c526072b3: Status 404 returned error can't find the container with id 2dce123b96ce80748743dd166ab91ba7c014886797663692b7252d8c526072b3 Apr 23 17:58:09.081219 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:09.081196 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/1488f8ce-4a38-42c4-bf06-11aecf7277e0-certificates\") pod \"keda-operator-ffbb595cb-6q6bp\" (UID: \"1488f8ce-4a38-42c4-bf06-11aecf7277e0\") " pod="openshift-keda/keda-operator-ffbb595cb-6q6bp" Apr 23 17:58:09.081317 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:09.081306 2574 secret.go:281] references non-existent secret key: ca.crt Apr 23 17:58:09.081362 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:09.081320 2574 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 23 17:58:09.081362 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:09.081327 2574 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-6q6bp: references 
non-existent secret key: ca.crt Apr 23 17:58:09.081421 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:09.081369 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1488f8ce-4a38-42c4-bf06-11aecf7277e0-certificates podName:1488f8ce-4a38-42c4-bf06-11aecf7277e0 nodeName:}" failed. No retries permitted until 2026-04-23 17:58:11.081355964 +0000 UTC m=+328.978809713 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/1488f8ce-4a38-42c4-bf06-11aecf7277e0-certificates") pod "keda-operator-ffbb595cb-6q6bp" (UID: "1488f8ce-4a38-42c4-bf06-11aecf7277e0") : references non-existent secret key: ca.crt Apr 23 17:58:09.383890 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:09.383860 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/3b6273ea-649c-4bb4-8ca0-047aa0046706-certificates\") pod \"keda-metrics-apiserver-7c9f485588-wzhmr\" (UID: \"3b6273ea-649c-4bb4-8ca0-047aa0046706\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" Apr 23 17:58:09.384048 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:09.384018 2574 secret.go:281] references non-existent secret key: tls.crt Apr 23 17:58:09.384048 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:09.384037 2574 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 17:58:09.384117 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:09.384056 2574 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr: references non-existent secret key: tls.crt Apr 23 17:58:09.384150 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:58:09.384122 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6273ea-649c-4bb4-8ca0-047aa0046706-certificates 
podName:3b6273ea-649c-4bb4-8ca0-047aa0046706 nodeName:}" failed. No retries permitted until 2026-04-23 17:58:11.384106433 +0000 UTC m=+329.281560181 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/3b6273ea-649c-4bb4-8ca0-047aa0046706-certificates") pod "keda-metrics-apiserver-7c9f485588-wzhmr" (UID: "3b6273ea-649c-4bb4-8ca0-047aa0046706") : references non-existent secret key: tls.crt Apr 23 17:58:09.563075 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:09.563041 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-gmzv6" event={"ID":"1290562a-df13-4f0c-99ab-dc4c1a8f89a7","Type":"ContainerStarted","Data":"2dce123b96ce80748743dd166ab91ba7c014886797663692b7252d8c526072b3"} Apr 23 17:58:11.096739 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:11.096706 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/1488f8ce-4a38-42c4-bf06-11aecf7277e0-certificates\") pod \"keda-operator-ffbb595cb-6q6bp\" (UID: \"1488f8ce-4a38-42c4-bf06-11aecf7277e0\") " pod="openshift-keda/keda-operator-ffbb595cb-6q6bp" Apr 23 17:58:11.099162 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:11.099140 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/1488f8ce-4a38-42c4-bf06-11aecf7277e0-certificates\") pod \"keda-operator-ffbb595cb-6q6bp\" (UID: \"1488f8ce-4a38-42c4-bf06-11aecf7277e0\") " pod="openshift-keda/keda-operator-ffbb595cb-6q6bp" Apr 23 17:58:11.325334 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:11.325311 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-6q6bp" Apr 23 17:58:11.399170 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:11.399138 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/3b6273ea-649c-4bb4-8ca0-047aa0046706-certificates\") pod \"keda-metrics-apiserver-7c9f485588-wzhmr\" (UID: \"3b6273ea-649c-4bb4-8ca0-047aa0046706\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" Apr 23 17:58:11.401783 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:11.401749 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/3b6273ea-649c-4bb4-8ca0-047aa0046706-certificates\") pod \"keda-metrics-apiserver-7c9f485588-wzhmr\" (UID: \"3b6273ea-649c-4bb4-8ca0-047aa0046706\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" Apr 23 17:58:11.437953 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:11.437930 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-6q6bp"] Apr 23 17:58:11.440273 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:58:11.440245 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1488f8ce_4a38_42c4_bf06_11aecf7277e0.slice/crio-456b55d3605580268c6962f27ee9520e2cc49a03bd9706d0629e7e77217516d3 WatchSource:0}: Error finding container 456b55d3605580268c6962f27ee9520e2cc49a03bd9706d0629e7e77217516d3: Status 404 returned error can't find the container with id 456b55d3605580268c6962f27ee9520e2cc49a03bd9706d0629e7e77217516d3 Apr 23 17:58:11.569037 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:11.569005 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-6q6bp" 
event={"ID":"1488f8ce-4a38-42c4-bf06-11aecf7277e0","Type":"ContainerStarted","Data":"456b55d3605580268c6962f27ee9520e2cc49a03bd9706d0629e7e77217516d3"} Apr 23 17:58:11.570278 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:11.570250 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-gmzv6" event={"ID":"1290562a-df13-4f0c-99ab-dc4c1a8f89a7","Type":"ContainerStarted","Data":"ec680d5ceb66bac20b5473e0e7845f25ebd8a08bfdc16369916a29e67a0f3cc9"} Apr 23 17:58:11.570408 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:11.570398 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-admission-cf49989db-gmzv6" Apr 23 17:58:11.587169 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:11.587126 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-admission-cf49989db-gmzv6" podStartSLOduration=3.023465715 podStartE2EDuration="4.587113082s" podCreationTimestamp="2026-04-23 17:58:07 +0000 UTC" firstStartedPulling="2026-04-23 17:58:09.052400887 +0000 UTC m=+326.949854635" lastFinishedPulling="2026-04-23 17:58:10.616048244 +0000 UTC m=+328.513502002" observedRunningTime="2026-04-23 17:58:11.5859172 +0000 UTC m=+329.483370971" watchObservedRunningTime="2026-04-23 17:58:11.587113082 +0000 UTC m=+329.484566846" Apr 23 17:58:11.604662 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:11.604640 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" Apr 23 17:58:11.716479 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:11.716458 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr"] Apr 23 17:58:11.719258 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:58:11.719233 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6273ea_649c_4bb4_8ca0_047aa0046706.slice/crio-206ec9520a7b8a5452e9277c170672f4d0475381cec8b684e64a66272b168158 WatchSource:0}: Error finding container 206ec9520a7b8a5452e9277c170672f4d0475381cec8b684e64a66272b168158: Status 404 returned error can't find the container with id 206ec9520a7b8a5452e9277c170672f4d0475381cec8b684e64a66272b168158 Apr 23 17:58:12.575209 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:12.575086 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" event={"ID":"3b6273ea-649c-4bb4-8ca0-047aa0046706","Type":"ContainerStarted","Data":"206ec9520a7b8a5452e9277c170672f4d0475381cec8b684e64a66272b168158"} Apr 23 17:58:15.585967 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:15.585879 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-6q6bp" event={"ID":"1488f8ce-4a38-42c4-bf06-11aecf7277e0","Type":"ContainerStarted","Data":"7c8e0592cbb87c4ff6c1e69852321d96ad2a50efe56a637918821ef9a264b87b"} Apr 23 17:58:15.585967 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:15.585946 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-operator-ffbb595cb-6q6bp" Apr 23 17:58:15.587203 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:15.587171 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" 
event={"ID":"3b6273ea-649c-4bb4-8ca0-047aa0046706","Type":"ContainerStarted","Data":"b28b9f6c143eedd6ff43eec7296bca84d0b94fc5cbe795ea2994bc96c9d6b013"} Apr 23 17:58:15.587336 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:15.587324 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" Apr 23 17:58:15.604833 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:15.604784 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-operator-ffbb595cb-6q6bp" podStartSLOduration=4.736754798 podStartE2EDuration="8.604773362s" podCreationTimestamp="2026-04-23 17:58:07 +0000 UTC" firstStartedPulling="2026-04-23 17:58:11.441610265 +0000 UTC m=+329.339064018" lastFinishedPulling="2026-04-23 17:58:15.309628823 +0000 UTC m=+333.207082582" observedRunningTime="2026-04-23 17:58:15.602969836 +0000 UTC m=+333.500423605" watchObservedRunningTime="2026-04-23 17:58:15.604773362 +0000 UTC m=+333.502227132" Apr 23 17:58:15.620484 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:15.620442 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" podStartSLOduration=5.036198685 podStartE2EDuration="8.620432611s" podCreationTimestamp="2026-04-23 17:58:07 +0000 UTC" firstStartedPulling="2026-04-23 17:58:11.720729419 +0000 UTC m=+329.618183181" lastFinishedPulling="2026-04-23 17:58:15.304963356 +0000 UTC m=+333.202417107" observedRunningTime="2026-04-23 17:58:15.619202324 +0000 UTC m=+333.516656094" watchObservedRunningTime="2026-04-23 17:58:15.620432611 +0000 UTC m=+333.517886379" Apr 23 17:58:26.595600 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:26.595512 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-wzhmr" Apr 23 17:58:28.561989 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:28.561961 2574 kubelet.go:2658] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-qx85w" Apr 23 17:58:32.577346 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:32.577311 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-admission-cf49989db-gmzv6" Apr 23 17:58:36.593744 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:58:36.593697 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-operator-ffbb595cb-6q6bp" Apr 23 17:59:17.074312 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.074283 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-6fc5d867c5-7t26h"] Apr 23 17:59:17.077511 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.077491 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-lfff6"] Apr 23 17:59:17.077672 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.077651 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" Apr 23 17:59:17.080332 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.080305 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-webhook-server-cert\"" Apr 23 17:59:17.080442 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.080423 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\"" Apr 23 17:59:17.080522 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.080505 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/llmisvc-controller-manager-68cc5db7c4-lfff6" Apr 23 17:59:17.081504 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.081489 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\"" Apr 23 17:59:17.081591 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.081546 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-controller-manager-dockercfg-d4nd7\"" Apr 23 17:59:17.083275 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.083259 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"llmisvc-webhook-server-cert\"" Apr 23 17:59:17.083275 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.083270 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"llmisvc-controller-manager-dockercfg-f4cbc\"" Apr 23 17:59:17.086592 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.086547 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-6fc5d867c5-7t26h"] Apr 23 17:59:17.090830 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.090812 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-lfff6"] Apr 23 17:59:17.158064 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.158035 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f0a0361-6db2-4257-973e-97ed1ac49c93-cert\") pod \"kserve-controller-manager-6fc5d867c5-7t26h\" (UID: \"1f0a0361-6db2-4257-973e-97ed1ac49c93\") " pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" Apr 23 17:59:17.158199 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.158082 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvnwn\" (UniqueName: 
\"kubernetes.io/projected/f793afa2-cfb2-422f-924a-9992608ca10c-kube-api-access-pvnwn\") pod \"llmisvc-controller-manager-68cc5db7c4-lfff6\" (UID: \"f793afa2-cfb2-422f-924a-9992608ca10c\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-lfff6" Apr 23 17:59:17.158199 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.158116 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cwc4\" (UniqueName: \"kubernetes.io/projected/1f0a0361-6db2-4257-973e-97ed1ac49c93-kube-api-access-2cwc4\") pod \"kserve-controller-manager-6fc5d867c5-7t26h\" (UID: \"1f0a0361-6db2-4257-973e-97ed1ac49c93\") " pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" Apr 23 17:59:17.158199 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.158137 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f793afa2-cfb2-422f-924a-9992608ca10c-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-lfff6\" (UID: \"f793afa2-cfb2-422f-924a-9992608ca10c\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-lfff6" Apr 23 17:59:17.259391 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.259364 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f0a0361-6db2-4257-973e-97ed1ac49c93-cert\") pod \"kserve-controller-manager-6fc5d867c5-7t26h\" (UID: \"1f0a0361-6db2-4257-973e-97ed1ac49c93\") " pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" Apr 23 17:59:17.259504 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.259408 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pvnwn\" (UniqueName: \"kubernetes.io/projected/f793afa2-cfb2-422f-924a-9992608ca10c-kube-api-access-pvnwn\") pod \"llmisvc-controller-manager-68cc5db7c4-lfff6\" (UID: \"f793afa2-cfb2-422f-924a-9992608ca10c\") " 
pod="kserve/llmisvc-controller-manager-68cc5db7c4-lfff6" Apr 23 17:59:17.259504 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.259428 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2cwc4\" (UniqueName: \"kubernetes.io/projected/1f0a0361-6db2-4257-973e-97ed1ac49c93-kube-api-access-2cwc4\") pod \"kserve-controller-manager-6fc5d867c5-7t26h\" (UID: \"1f0a0361-6db2-4257-973e-97ed1ac49c93\") " pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" Apr 23 17:59:17.259504 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.259447 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f793afa2-cfb2-422f-924a-9992608ca10c-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-lfff6\" (UID: \"f793afa2-cfb2-422f-924a-9992608ca10c\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-lfff6" Apr 23 17:59:17.259672 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:59:17.259653 2574 secret.go:189] Couldn't get secret kserve/kserve-webhook-server-cert: secret "kserve-webhook-server-cert" not found Apr 23 17:59:17.259739 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:59:17.259725 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f0a0361-6db2-4257-973e-97ed1ac49c93-cert podName:1f0a0361-6db2-4257-973e-97ed1ac49c93 nodeName:}" failed. No retries permitted until 2026-04-23 17:59:17.759702145 +0000 UTC m=+395.657155913 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f0a0361-6db2-4257-973e-97ed1ac49c93-cert") pod "kserve-controller-manager-6fc5d867c5-7t26h" (UID: "1f0a0361-6db2-4257-973e-97ed1ac49c93") : secret "kserve-webhook-server-cert" not found Apr 23 17:59:17.261680 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.261663 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f793afa2-cfb2-422f-924a-9992608ca10c-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-lfff6\" (UID: \"f793afa2-cfb2-422f-924a-9992608ca10c\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-lfff6" Apr 23 17:59:17.270544 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.270525 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cwc4\" (UniqueName: \"kubernetes.io/projected/1f0a0361-6db2-4257-973e-97ed1ac49c93-kube-api-access-2cwc4\") pod \"kserve-controller-manager-6fc5d867c5-7t26h\" (UID: \"1f0a0361-6db2-4257-973e-97ed1ac49c93\") " pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" Apr 23 17:59:17.270652 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.270617 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvnwn\" (UniqueName: \"kubernetes.io/projected/f793afa2-cfb2-422f-924a-9992608ca10c-kube-api-access-pvnwn\") pod \"llmisvc-controller-manager-68cc5db7c4-lfff6\" (UID: \"f793afa2-cfb2-422f-924a-9992608ca10c\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-lfff6" Apr 23 17:59:17.399099 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.399036 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/llmisvc-controller-manager-68cc5db7c4-lfff6" Apr 23 17:59:17.516085 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.516048 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-lfff6"] Apr 23 17:59:17.518831 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:59:17.518805 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf793afa2_cfb2_422f_924a_9992608ca10c.slice/crio-2c836e868739f938c49bdac2f37c4dcbb0ff8d1caecf7556b248cd00bfe54a4f WatchSource:0}: Error finding container 2c836e868739f938c49bdac2f37c4dcbb0ff8d1caecf7556b248cd00bfe54a4f: Status 404 returned error can't find the container with id 2c836e868739f938c49bdac2f37c4dcbb0ff8d1caecf7556b248cd00bfe54a4f Apr 23 17:59:17.762074 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.762043 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f0a0361-6db2-4257-973e-97ed1ac49c93-cert\") pod \"kserve-controller-manager-6fc5d867c5-7t26h\" (UID: \"1f0a0361-6db2-4257-973e-97ed1ac49c93\") " pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" Apr 23 17:59:17.762945 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.762917 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-lfff6" event={"ID":"f793afa2-cfb2-422f-924a-9992608ca10c","Type":"ContainerStarted","Data":"2c836e868739f938c49bdac2f37c4dcbb0ff8d1caecf7556b248cd00bfe54a4f"} Apr 23 17:59:17.764410 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:17.764389 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f0a0361-6db2-4257-973e-97ed1ac49c93-cert\") pod \"kserve-controller-manager-6fc5d867c5-7t26h\" (UID: \"1f0a0361-6db2-4257-973e-97ed1ac49c93\") " pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" Apr 23 17:59:17.989219 ip-10-0-142-106 
kubenswrapper[2574]: I0423 17:59:17.989196 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" Apr 23 17:59:18.435603 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:18.435553 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-6fc5d867c5-7t26h"] Apr 23 17:59:18.452659 ip-10-0-142-106 kubenswrapper[2574]: W0423 17:59:18.452621 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f0a0361_6db2_4257_973e_97ed1ac49c93.slice/crio-b2422431f97799c72e6e8cca111a4e88e11806abb902a601b7e3846972e349d8 WatchSource:0}: Error finding container b2422431f97799c72e6e8cca111a4e88e11806abb902a601b7e3846972e349d8: Status 404 returned error can't find the container with id b2422431f97799c72e6e8cca111a4e88e11806abb902a601b7e3846972e349d8 Apr 23 17:59:18.767392 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:18.767351 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" event={"ID":"1f0a0361-6db2-4257-973e-97ed1ac49c93","Type":"ContainerStarted","Data":"b2422431f97799c72e6e8cca111a4e88e11806abb902a601b7e3846972e349d8"} Apr 23 17:59:19.773586 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:19.773531 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-lfff6" event={"ID":"f793afa2-cfb2-422f-924a-9992608ca10c","Type":"ContainerStarted","Data":"9b61450e6b297d69a5f50d0590470937cc9f12da8f75b17adb8cb84d997fd6cb"} Apr 23 17:59:19.774028 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:19.773693 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-lfff6" Apr 23 17:59:19.794006 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:19.793864 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kserve/llmisvc-controller-manager-68cc5db7c4-lfff6" podStartSLOduration=0.937630498 podStartE2EDuration="2.793847478s" podCreationTimestamp="2026-04-23 17:59:17 +0000 UTC" firstStartedPulling="2026-04-23 17:59:17.519994882 +0000 UTC m=+395.417448630" lastFinishedPulling="2026-04-23 17:59:19.376211858 +0000 UTC m=+397.273665610" observedRunningTime="2026-04-23 17:59:19.792429444 +0000 UTC m=+397.689883229" watchObservedRunningTime="2026-04-23 17:59:19.793847478 +0000 UTC m=+397.691301248" Apr 23 17:59:21.780718 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:21.780681 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" event={"ID":"1f0a0361-6db2-4257-973e-97ed1ac49c93","Type":"ContainerStarted","Data":"c250f7ca71890a37c92cd589b1df4bf82f5c122fdf60bb11212062529bed99f6"} Apr 23 17:59:21.781058 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:21.780772 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" Apr 23 17:59:21.800601 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:21.800534 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" podStartSLOduration=1.8271545040000001 podStartE2EDuration="4.800521973s" podCreationTimestamp="2026-04-23 17:59:17 +0000 UTC" firstStartedPulling="2026-04-23 17:59:18.454183095 +0000 UTC m=+396.351636855" lastFinishedPulling="2026-04-23 17:59:21.427550576 +0000 UTC m=+399.325004324" observedRunningTime="2026-04-23 17:59:21.799192198 +0000 UTC m=+399.696645969" watchObservedRunningTime="2026-04-23 17:59:21.800521973 +0000 UTC m=+399.697975744" Apr 23 17:59:50.779135 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:50.779108 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-lfff6" Apr 23 17:59:52.172355 ip-10-0-142-106 kubenswrapper[2574]: I0423 
17:59:52.172256 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-6fc5d867c5-7t26h"] Apr 23 17:59:52.172853 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.172525 2574 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" podUID="1f0a0361-6db2-4257-973e-97ed1ac49c93" containerName="manager" containerID="cri-o://c250f7ca71890a37c92cd589b1df4bf82f5c122fdf60bb11212062529bed99f6" gracePeriod=10 Apr 23 17:59:52.178175 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.178142 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" Apr 23 17:59:52.402825 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.402800 2574 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" Apr 23 17:59:52.511957 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.511928 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f0a0361-6db2-4257-973e-97ed1ac49c93-cert\") pod \"1f0a0361-6db2-4257-973e-97ed1ac49c93\" (UID: \"1f0a0361-6db2-4257-973e-97ed1ac49c93\") " Apr 23 17:59:52.512079 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.512026 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cwc4\" (UniqueName: \"kubernetes.io/projected/1f0a0361-6db2-4257-973e-97ed1ac49c93-kube-api-access-2cwc4\") pod \"1f0a0361-6db2-4257-973e-97ed1ac49c93\" (UID: \"1f0a0361-6db2-4257-973e-97ed1ac49c93\") " Apr 23 17:59:52.513974 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.513947 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0a0361-6db2-4257-973e-97ed1ac49c93-cert" (OuterVolumeSpecName: "cert") pod "1f0a0361-6db2-4257-973e-97ed1ac49c93" (UID: 
"1f0a0361-6db2-4257-973e-97ed1ac49c93"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 17:59:52.514108 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.514067 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f0a0361-6db2-4257-973e-97ed1ac49c93-kube-api-access-2cwc4" (OuterVolumeSpecName: "kube-api-access-2cwc4") pod "1f0a0361-6db2-4257-973e-97ed1ac49c93" (UID: "1f0a0361-6db2-4257-973e-97ed1ac49c93"). InnerVolumeSpecName "kube-api-access-2cwc4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:59:52.612837 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.612814 2574 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f0a0361-6db2-4257-973e-97ed1ac49c93-cert\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\"" Apr 23 17:59:52.612837 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.612835 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2cwc4\" (UniqueName: \"kubernetes.io/projected/1f0a0361-6db2-4257-973e-97ed1ac49c93-kube-api-access-2cwc4\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\"" Apr 23 17:59:52.880625 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.880542 2574 generic.go:358] "Generic (PLEG): container finished" podID="1f0a0361-6db2-4257-973e-97ed1ac49c93" containerID="c250f7ca71890a37c92cd589b1df4bf82f5c122fdf60bb11212062529bed99f6" exitCode=0 Apr 23 17:59:52.880625 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.880605 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" event={"ID":"1f0a0361-6db2-4257-973e-97ed1ac49c93","Type":"ContainerDied","Data":"c250f7ca71890a37c92cd589b1df4bf82f5c122fdf60bb11212062529bed99f6"} Apr 23 17:59:52.880768 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.880624 2574 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" Apr 23 17:59:52.880768 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.880639 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-6fc5d867c5-7t26h" event={"ID":"1f0a0361-6db2-4257-973e-97ed1ac49c93","Type":"ContainerDied","Data":"b2422431f97799c72e6e8cca111a4e88e11806abb902a601b7e3846972e349d8"} Apr 23 17:59:52.880768 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.880657 2574 scope.go:117] "RemoveContainer" containerID="c250f7ca71890a37c92cd589b1df4bf82f5c122fdf60bb11212062529bed99f6" Apr 23 17:59:52.888117 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.888100 2574 scope.go:117] "RemoveContainer" containerID="c250f7ca71890a37c92cd589b1df4bf82f5c122fdf60bb11212062529bed99f6" Apr 23 17:59:52.888331 ip-10-0-142-106 kubenswrapper[2574]: E0423 17:59:52.888314 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c250f7ca71890a37c92cd589b1df4bf82f5c122fdf60bb11212062529bed99f6\": container with ID starting with c250f7ca71890a37c92cd589b1df4bf82f5c122fdf60bb11212062529bed99f6 not found: ID does not exist" containerID="c250f7ca71890a37c92cd589b1df4bf82f5c122fdf60bb11212062529bed99f6" Apr 23 17:59:52.888378 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.888338 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c250f7ca71890a37c92cd589b1df4bf82f5c122fdf60bb11212062529bed99f6"} err="failed to get container status \"c250f7ca71890a37c92cd589b1df4bf82f5c122fdf60bb11212062529bed99f6\": rpc error: code = NotFound desc = could not find container \"c250f7ca71890a37c92cd589b1df4bf82f5c122fdf60bb11212062529bed99f6\": container with ID starting with c250f7ca71890a37c92cd589b1df4bf82f5c122fdf60bb11212062529bed99f6 not found: ID does not exist" Apr 23 17:59:52.899063 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.899041 
2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-6fc5d867c5-7t26h"] Apr 23 17:59:52.905417 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:52.905398 2574 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve/kserve-controller-manager-6fc5d867c5-7t26h"] Apr 23 17:59:54.645246 ip-10-0-142-106 kubenswrapper[2574]: I0423 17:59:54.645211 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f0a0361-6db2-4257-973e-97ed1ac49c93" path="/var/lib/kubelet/pods/1f0a0361-6db2-4257-973e-97ed1ac49c93/volumes" Apr 23 18:00:27.645779 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.645742 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/model-serving-api-86f7b4b499-rl6jq"] Apr 23 18:00:27.646211 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.646026 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f0a0361-6db2-4257-973e-97ed1ac49c93" containerName="manager" Apr 23 18:00:27.646211 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.646037 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0a0361-6db2-4257-973e-97ed1ac49c93" containerName="manager" Apr 23 18:00:27.646211 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.646088 2574 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f0a0361-6db2-4257-973e-97ed1ac49c93" containerName="manager" Apr 23 18:00:27.648784 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.648768 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-rl6jq" Apr 23 18:00:27.651128 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.651106 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-tls\"" Apr 23 18:00:27.651284 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.651168 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-dockercfg-rw9vp\"" Apr 23 18:00:27.658891 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.658867 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-rl6jq"] Apr 23 18:00:27.661069 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.661045 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/odh-model-controller-696fc77849-z46vh"] Apr 23 18:00:27.664089 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.664071 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/odh-model-controller-696fc77849-z46vh" Apr 23 18:00:27.666460 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.666441 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-webhook-cert\"" Apr 23 18:00:27.666543 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.666449 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-dockercfg-jf4ld\"" Apr 23 18:00:27.674184 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.674163 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-z46vh"] Apr 23 18:00:27.759115 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.759092 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trv9c\" (UniqueName: \"kubernetes.io/projected/96e2ad04-9dfa-4fb4-99b1-e68969c34d0f-kube-api-access-trv9c\") pod 
\"model-serving-api-86f7b4b499-rl6jq\" (UID: \"96e2ad04-9dfa-4fb4-99b1-e68969c34d0f\") " pod="kserve/model-serving-api-86f7b4b499-rl6jq" Apr 23 18:00:27.759236 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.759127 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8b66bea-9e43-4836-96c0-d927a3187933-cert\") pod \"odh-model-controller-696fc77849-z46vh\" (UID: \"e8b66bea-9e43-4836-96c0-d927a3187933\") " pod="kserve/odh-model-controller-696fc77849-z46vh" Apr 23 18:00:27.759236 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.759150 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/96e2ad04-9dfa-4fb4-99b1-e68969c34d0f-tls-certs\") pod \"model-serving-api-86f7b4b499-rl6jq\" (UID: \"96e2ad04-9dfa-4fb4-99b1-e68969c34d0f\") " pod="kserve/model-serving-api-86f7b4b499-rl6jq" Apr 23 18:00:27.759320 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.759231 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7cls\" (UniqueName: \"kubernetes.io/projected/e8b66bea-9e43-4836-96c0-d927a3187933-kube-api-access-h7cls\") pod \"odh-model-controller-696fc77849-z46vh\" (UID: \"e8b66bea-9e43-4836-96c0-d927a3187933\") " pod="kserve/odh-model-controller-696fc77849-z46vh" Apr 23 18:00:27.859689 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.859663 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/96e2ad04-9dfa-4fb4-99b1-e68969c34d0f-tls-certs\") pod \"model-serving-api-86f7b4b499-rl6jq\" (UID: \"96e2ad04-9dfa-4fb4-99b1-e68969c34d0f\") " pod="kserve/model-serving-api-86f7b4b499-rl6jq" Apr 23 18:00:27.859786 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.859716 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-h7cls\" (UniqueName: \"kubernetes.io/projected/e8b66bea-9e43-4836-96c0-d927a3187933-kube-api-access-h7cls\") pod \"odh-model-controller-696fc77849-z46vh\" (UID: \"e8b66bea-9e43-4836-96c0-d927a3187933\") " pod="kserve/odh-model-controller-696fc77849-z46vh" Apr 23 18:00:27.859823 ip-10-0-142-106 kubenswrapper[2574]: E0423 18:00:27.859808 2574 secret.go:189] Couldn't get secret kserve/model-serving-api-tls: secret "model-serving-api-tls" not found Apr 23 18:00:27.859881 ip-10-0-142-106 kubenswrapper[2574]: E0423 18:00:27.859871 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96e2ad04-9dfa-4fb4-99b1-e68969c34d0f-tls-certs podName:96e2ad04-9dfa-4fb4-99b1-e68969c34d0f nodeName:}" failed. No retries permitted until 2026-04-23 18:00:28.359854964 +0000 UTC m=+466.257308719 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certs" (UniqueName: "kubernetes.io/secret/96e2ad04-9dfa-4fb4-99b1-e68969c34d0f-tls-certs") pod "model-serving-api-86f7b4b499-rl6jq" (UID: "96e2ad04-9dfa-4fb4-99b1-e68969c34d0f") : secret "model-serving-api-tls" not found Apr 23 18:00:27.859988 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.859892 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-trv9c\" (UniqueName: \"kubernetes.io/projected/96e2ad04-9dfa-4fb4-99b1-e68969c34d0f-kube-api-access-trv9c\") pod \"model-serving-api-86f7b4b499-rl6jq\" (UID: \"96e2ad04-9dfa-4fb4-99b1-e68969c34d0f\") " pod="kserve/model-serving-api-86f7b4b499-rl6jq" Apr 23 18:00:27.859988 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.859926 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8b66bea-9e43-4836-96c0-d927a3187933-cert\") pod \"odh-model-controller-696fc77849-z46vh\" (UID: \"e8b66bea-9e43-4836-96c0-d927a3187933\") " pod="kserve/odh-model-controller-696fc77849-z46vh" Apr 23 18:00:27.862220 
ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.862197 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8b66bea-9e43-4836-96c0-d927a3187933-cert\") pod \"odh-model-controller-696fc77849-z46vh\" (UID: \"e8b66bea-9e43-4836-96c0-d927a3187933\") " pod="kserve/odh-model-controller-696fc77849-z46vh" Apr 23 18:00:27.873535 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.873516 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-trv9c\" (UniqueName: \"kubernetes.io/projected/96e2ad04-9dfa-4fb4-99b1-e68969c34d0f-kube-api-access-trv9c\") pod \"model-serving-api-86f7b4b499-rl6jq\" (UID: \"96e2ad04-9dfa-4fb4-99b1-e68969c34d0f\") " pod="kserve/model-serving-api-86f7b4b499-rl6jq" Apr 23 18:00:27.873733 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.873714 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7cls\" (UniqueName: \"kubernetes.io/projected/e8b66bea-9e43-4836-96c0-d927a3187933-kube-api-access-h7cls\") pod \"odh-model-controller-696fc77849-z46vh\" (UID: \"e8b66bea-9e43-4836-96c0-d927a3187933\") " pod="kserve/odh-model-controller-696fc77849-z46vh" Apr 23 18:00:27.973918 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:27.973899 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/odh-model-controller-696fc77849-z46vh"
Apr 23 18:00:28.096270 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:28.096243 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-z46vh"]
Apr 23 18:00:28.098634 ip-10-0-142-106 kubenswrapper[2574]: W0423 18:00:28.098602 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8b66bea_9e43_4836_96c0_d927a3187933.slice/crio-d012c39af329dd86867151f0e494904da1adb6d9472e1abd784a69a6f2b7e758 WatchSource:0}: Error finding container d012c39af329dd86867151f0e494904da1adb6d9472e1abd784a69a6f2b7e758: Status 404 returned error can't find the container with id d012c39af329dd86867151f0e494904da1adb6d9472e1abd784a69a6f2b7e758
Apr 23 18:00:28.364442 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:28.364374 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/96e2ad04-9dfa-4fb4-99b1-e68969c34d0f-tls-certs\") pod \"model-serving-api-86f7b4b499-rl6jq\" (UID: \"96e2ad04-9dfa-4fb4-99b1-e68969c34d0f\") " pod="kserve/model-serving-api-86f7b4b499-rl6jq"
Apr 23 18:00:28.366547 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:28.366525 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/96e2ad04-9dfa-4fb4-99b1-e68969c34d0f-tls-certs\") pod \"model-serving-api-86f7b4b499-rl6jq\" (UID: \"96e2ad04-9dfa-4fb4-99b1-e68969c34d0f\") " pod="kserve/model-serving-api-86f7b4b499-rl6jq"
Apr 23 18:00:28.559657 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:28.559636 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-rl6jq"
Apr 23 18:00:28.680038 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:28.680016 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-rl6jq"]
Apr 23 18:00:28.681818 ip-10-0-142-106 kubenswrapper[2574]: W0423 18:00:28.681791 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96e2ad04_9dfa_4fb4_99b1_e68969c34d0f.slice/crio-cd4a57da4c10b2bc3230dd357b0f58a3acab1070445a3e6137c288b60813f5b9 WatchSource:0}: Error finding container cd4a57da4c10b2bc3230dd357b0f58a3acab1070445a3e6137c288b60813f5b9: Status 404 returned error can't find the container with id cd4a57da4c10b2bc3230dd357b0f58a3acab1070445a3e6137c288b60813f5b9
Apr 23 18:00:29.004447 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:29.004414 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-rl6jq" event={"ID":"96e2ad04-9dfa-4fb4-99b1-e68969c34d0f","Type":"ContainerStarted","Data":"cd4a57da4c10b2bc3230dd357b0f58a3acab1070445a3e6137c288b60813f5b9"}
Apr 23 18:00:29.005278 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:29.005254 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-z46vh" event={"ID":"e8b66bea-9e43-4836-96c0-d927a3187933","Type":"ContainerStarted","Data":"d012c39af329dd86867151f0e494904da1adb6d9472e1abd784a69a6f2b7e758"}
Apr 23 18:00:32.016776 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:32.016737 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-rl6jq" event={"ID":"96e2ad04-9dfa-4fb4-99b1-e68969c34d0f","Type":"ContainerStarted","Data":"66b1c4e4bef96828675d3cfd4e8009af35c04b30fb50dfc322cb7a433c93d939"}
Apr 23 18:00:32.017216 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:32.016860 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/model-serving-api-86f7b4b499-rl6jq"
Apr 23 18:00:32.018021 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:32.017996 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-z46vh" event={"ID":"e8b66bea-9e43-4836-96c0-d927a3187933","Type":"ContainerStarted","Data":"c3f409661ed05bef7d53283a56620c30dceb76a1dad0533c0eb465652b325243"}
Apr 23 18:00:32.018150 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:32.018129 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/odh-model-controller-696fc77849-z46vh"
Apr 23 18:00:32.035585 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:32.035534 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/model-serving-api-86f7b4b499-rl6jq" podStartSLOduration=2.214220645 podStartE2EDuration="5.035522887s" podCreationTimestamp="2026-04-23 18:00:27 +0000 UTC" firstStartedPulling="2026-04-23 18:00:28.683463975 +0000 UTC m=+466.580917722" lastFinishedPulling="2026-04-23 18:00:31.504766207 +0000 UTC m=+469.402219964" observedRunningTime="2026-04-23 18:00:32.033306963 +0000 UTC m=+469.930760734" watchObservedRunningTime="2026-04-23 18:00:32.035522887 +0000 UTC m=+469.932976691"
Apr 23 18:00:32.050024 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:32.049981 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/odh-model-controller-696fc77849-z46vh" podStartSLOduration=1.597585018 podStartE2EDuration="5.049970143s" podCreationTimestamp="2026-04-23 18:00:27 +0000 UTC" firstStartedPulling="2026-04-23 18:00:28.099776302 +0000 UTC m=+465.997230050" lastFinishedPulling="2026-04-23 18:00:31.552161422 +0000 UTC m=+469.449615175" observedRunningTime="2026-04-23 18:00:32.048649765 +0000 UTC m=+469.946103535" watchObservedRunningTime="2026-04-23 18:00:32.049970143 +0000 UTC m=+469.947423913"
Apr 23 18:00:43.023520 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:43.023485 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/odh-model-controller-696fc77849-z46vh"
Apr 23 18:00:43.025368 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:43.025345 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/model-serving-api-86f7b4b499-rl6jq"
Apr 23 18:00:57.658232 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:57.658197 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4"]
Apr 23 18:00:57.661530 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:57.661509 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4"
Apr 23 18:00:57.664081 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:57.664061 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"seaweedfs-tls-custom-artifact\""
Apr 23 18:00:57.664172 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:57.664090 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"default-dockercfg-8fzpf\""
Apr 23 18:00:57.670030 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:57.670006 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4"]
Apr 23 18:00:57.762033 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:57.761990 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xlfb\" (UniqueName: \"kubernetes.io/projected/26278a4e-39cb-4faf-b69f-cba896668a3d-kube-api-access-7xlfb\") pod \"seaweedfs-tls-custom-ddd4dbfd-prrh4\" (UID: \"26278a4e-39cb-4faf-b69f-cba896668a3d\") " pod="kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4"
Apr 23 18:00:57.762202 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:57.762045 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/26278a4e-39cb-4faf-b69f-cba896668a3d-data\") pod \"seaweedfs-tls-custom-ddd4dbfd-prrh4\" (UID: \"26278a4e-39cb-4faf-b69f-cba896668a3d\") " pod="kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4"
Apr 23 18:00:57.862385 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:57.862343 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7xlfb\" (UniqueName: \"kubernetes.io/projected/26278a4e-39cb-4faf-b69f-cba896668a3d-kube-api-access-7xlfb\") pod \"seaweedfs-tls-custom-ddd4dbfd-prrh4\" (UID: \"26278a4e-39cb-4faf-b69f-cba896668a3d\") " pod="kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4"
Apr 23 18:00:57.862547 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:57.862396 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/26278a4e-39cb-4faf-b69f-cba896668a3d-data\") pod \"seaweedfs-tls-custom-ddd4dbfd-prrh4\" (UID: \"26278a4e-39cb-4faf-b69f-cba896668a3d\") " pod="kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4"
Apr 23 18:00:57.862858 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:57.862834 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/26278a4e-39cb-4faf-b69f-cba896668a3d-data\") pod \"seaweedfs-tls-custom-ddd4dbfd-prrh4\" (UID: \"26278a4e-39cb-4faf-b69f-cba896668a3d\") " pod="kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4"
Apr 23 18:00:57.871919 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:57.871890 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xlfb\" (UniqueName: \"kubernetes.io/projected/26278a4e-39cb-4faf-b69f-cba896668a3d-kube-api-access-7xlfb\") pod \"seaweedfs-tls-custom-ddd4dbfd-prrh4\" (UID: \"26278a4e-39cb-4faf-b69f-cba896668a3d\") " pod="kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4"
Apr 23 18:00:57.972848 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:57.972823 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4"
Apr 23 18:00:58.089560 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:58.089443 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4"]
Apr 23 18:00:58.092115 ip-10-0-142-106 kubenswrapper[2574]: W0423 18:00:58.092088 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26278a4e_39cb_4faf_b69f_cba896668a3d.slice/crio-fdbc20208dcef2738e59732a0c61cb2aed22ceb201dc4b5927a6f3b09a50a23e WatchSource:0}: Error finding container fdbc20208dcef2738e59732a0c61cb2aed22ceb201dc4b5927a6f3b09a50a23e: Status 404 returned error can't find the container with id fdbc20208dcef2738e59732a0c61cb2aed22ceb201dc4b5927a6f3b09a50a23e
Apr 23 18:00:58.100475 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:00:58.100447 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4" event={"ID":"26278a4e-39cb-4faf-b69f-cba896668a3d","Type":"ContainerStarted","Data":"fdbc20208dcef2738e59732a0c61cb2aed22ceb201dc4b5927a6f3b09a50a23e"}
Apr 23 18:01:26.189584 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:26.189533 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4" event={"ID":"26278a4e-39cb-4faf-b69f-cba896668a3d","Type":"ContainerStarted","Data":"004f950b7d02e47b55bbec8b1185644097516d7664ce10ffbe19cf389de43e0a"}
Apr 23 18:01:26.207918 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:26.207864 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4" podStartSLOduration=1.760106583 podStartE2EDuration="29.207853692s" podCreationTimestamp="2026-04-23 18:00:57 +0000 UTC" firstStartedPulling="2026-04-23 18:00:58.093410395 +0000 UTC m=+495.990864158" lastFinishedPulling="2026-04-23 18:01:25.54115751 +0000 UTC m=+523.438611267" observedRunningTime="2026-04-23 18:01:26.20591737 +0000 UTC m=+524.103371152" watchObservedRunningTime="2026-04-23 18:01:26.207853692 +0000 UTC m=+524.105307463"
Apr 23 18:01:27.198192 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:27.198155 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4"]
Apr 23 18:01:28.196248 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:28.196207 2574 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4" podUID="26278a4e-39cb-4faf-b69f-cba896668a3d" containerName="seaweedfs-tls-custom" containerID="cri-o://004f950b7d02e47b55bbec8b1185644097516d7664ce10ffbe19cf389de43e0a" gracePeriod=30
Apr 23 18:01:29.432062 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:29.432037 2574 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4"
Apr 23 18:01:29.496002 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:29.495978 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xlfb\" (UniqueName: \"kubernetes.io/projected/26278a4e-39cb-4faf-b69f-cba896668a3d-kube-api-access-7xlfb\") pod \"26278a4e-39cb-4faf-b69f-cba896668a3d\" (UID: \"26278a4e-39cb-4faf-b69f-cba896668a3d\") "
Apr 23 18:01:29.496122 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:29.496065 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/26278a4e-39cb-4faf-b69f-cba896668a3d-data\") pod \"26278a4e-39cb-4faf-b69f-cba896668a3d\" (UID: \"26278a4e-39cb-4faf-b69f-cba896668a3d\") "
Apr 23 18:01:29.497247 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:29.497215 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26278a4e-39cb-4faf-b69f-cba896668a3d-data" (OuterVolumeSpecName: "data") pod "26278a4e-39cb-4faf-b69f-cba896668a3d" (UID: "26278a4e-39cb-4faf-b69f-cba896668a3d"). InnerVolumeSpecName "data". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 23 18:01:29.497958 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:29.497934 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26278a4e-39cb-4faf-b69f-cba896668a3d-kube-api-access-7xlfb" (OuterVolumeSpecName: "kube-api-access-7xlfb") pod "26278a4e-39cb-4faf-b69f-cba896668a3d" (UID: "26278a4e-39cb-4faf-b69f-cba896668a3d"). InnerVolumeSpecName "kube-api-access-7xlfb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:01:29.596777 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:29.596751 2574 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/26278a4e-39cb-4faf-b69f-cba896668a3d-data\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\""
Apr 23 18:01:29.596777 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:29.596773 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7xlfb\" (UniqueName: \"kubernetes.io/projected/26278a4e-39cb-4faf-b69f-cba896668a3d-kube-api-access-7xlfb\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\""
Apr 23 18:01:30.202612 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:30.202551 2574 generic.go:358] "Generic (PLEG): container finished" podID="26278a4e-39cb-4faf-b69f-cba896668a3d" containerID="004f950b7d02e47b55bbec8b1185644097516d7664ce10ffbe19cf389de43e0a" exitCode=0
Apr 23 18:01:30.202776 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:30.202629 2574 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4"
Apr 23 18:01:30.202776 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:30.202637 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4" event={"ID":"26278a4e-39cb-4faf-b69f-cba896668a3d","Type":"ContainerDied","Data":"004f950b7d02e47b55bbec8b1185644097516d7664ce10ffbe19cf389de43e0a"}
Apr 23 18:01:30.202776 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:30.202680 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4" event={"ID":"26278a4e-39cb-4faf-b69f-cba896668a3d","Type":"ContainerDied","Data":"fdbc20208dcef2738e59732a0c61cb2aed22ceb201dc4b5927a6f3b09a50a23e"}
Apr 23 18:01:30.202776 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:30.202698 2574 scope.go:117] "RemoveContainer" containerID="004f950b7d02e47b55bbec8b1185644097516d7664ce10ffbe19cf389de43e0a"
Apr 23 18:01:30.211523 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:30.211506 2574 scope.go:117] "RemoveContainer" containerID="004f950b7d02e47b55bbec8b1185644097516d7664ce10ffbe19cf389de43e0a"
Apr 23 18:01:30.211782 ip-10-0-142-106 kubenswrapper[2574]: E0423 18:01:30.211762 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"004f950b7d02e47b55bbec8b1185644097516d7664ce10ffbe19cf389de43e0a\": container with ID starting with 004f950b7d02e47b55bbec8b1185644097516d7664ce10ffbe19cf389de43e0a not found: ID does not exist" containerID="004f950b7d02e47b55bbec8b1185644097516d7664ce10ffbe19cf389de43e0a"
Apr 23 18:01:30.211872 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:30.211787 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"004f950b7d02e47b55bbec8b1185644097516d7664ce10ffbe19cf389de43e0a"} err="failed to get container status \"004f950b7d02e47b55bbec8b1185644097516d7664ce10ffbe19cf389de43e0a\": rpc error: code = NotFound desc = could not find container \"004f950b7d02e47b55bbec8b1185644097516d7664ce10ffbe19cf389de43e0a\": container with ID starting with 004f950b7d02e47b55bbec8b1185644097516d7664ce10ffbe19cf389de43e0a not found: ID does not exist"
Apr 23 18:01:30.225215 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:30.225193 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4"]
Apr 23 18:01:30.229892 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:30.229875 2574 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve/seaweedfs-tls-custom-ddd4dbfd-prrh4"]
Apr 23 18:01:30.644629 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:30.644600 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26278a4e-39cb-4faf-b69f-cba896668a3d" path="/var/lib/kubelet/pods/26278a4e-39cb-4faf-b69f-cba896668a3d/volumes"
Apr 23 18:01:41.644746 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.644711 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m"]
Apr 23 18:01:41.645108 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.645000 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="26278a4e-39cb-4faf-b69f-cba896668a3d" containerName="seaweedfs-tls-custom"
Apr 23 18:01:41.645108 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.645011 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="26278a4e-39cb-4faf-b69f-cba896668a3d" containerName="seaweedfs-tls-custom"
Apr 23 18:01:41.645108 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.645057 2574 memory_manager.go:356] "RemoveStaleState removing state" podUID="26278a4e-39cb-4faf-b69f-cba896668a3d" containerName="seaweedfs-tls-custom"
Apr 23 18:01:41.650916 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.650898 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m"
Apr 23 18:01:41.656748 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.656714 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"seaweedfs-tls-serving\""
Apr 23 18:01:41.656900 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.656736 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"seaweedfs-tls-serving-artifact\""
Apr 23 18:01:41.656900 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.656817 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"default-dockercfg-8fzpf\""
Apr 23 18:01:41.659757 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.659731 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m"]
Apr 23 18:01:41.787763 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.787734 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgh6v\" (UniqueName: \"kubernetes.io/projected/b493e048-e303-4172-9840-2e03434d0484-kube-api-access-tgh6v\") pod \"seaweedfs-tls-serving-7fd5766db9-qtc9m\" (UID: \"b493e048-e303-4172-9840-2e03434d0484\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m"
Apr 23 18:01:41.787893 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.787796 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"seaweedfs-tls-serving\" (UniqueName: \"kubernetes.io/projected/b493e048-e303-4172-9840-2e03434d0484-seaweedfs-tls-serving\") pod \"seaweedfs-tls-serving-7fd5766db9-qtc9m\" (UID: \"b493e048-e303-4172-9840-2e03434d0484\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m"
Apr 23 18:01:41.787893 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.787837 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b493e048-e303-4172-9840-2e03434d0484-data\") pod \"seaweedfs-tls-serving-7fd5766db9-qtc9m\" (UID: \"b493e048-e303-4172-9840-2e03434d0484\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m"
Apr 23 18:01:41.888620 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.888589 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"seaweedfs-tls-serving\" (UniqueName: \"kubernetes.io/projected/b493e048-e303-4172-9840-2e03434d0484-seaweedfs-tls-serving\") pod \"seaweedfs-tls-serving-7fd5766db9-qtc9m\" (UID: \"b493e048-e303-4172-9840-2e03434d0484\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m"
Apr 23 18:01:41.888748 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.888637 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b493e048-e303-4172-9840-2e03434d0484-data\") pod \"seaweedfs-tls-serving-7fd5766db9-qtc9m\" (UID: \"b493e048-e303-4172-9840-2e03434d0484\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m"
Apr 23 18:01:41.888748 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.888685 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tgh6v\" (UniqueName: \"kubernetes.io/projected/b493e048-e303-4172-9840-2e03434d0484-kube-api-access-tgh6v\") pod \"seaweedfs-tls-serving-7fd5766db9-qtc9m\" (UID: \"b493e048-e303-4172-9840-2e03434d0484\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m"
Apr 23 18:01:41.888748 ip-10-0-142-106 kubenswrapper[2574]: E0423 18:01:41.888713 2574 projected.go:264] Couldn't get secret kserve/seaweedfs-tls-serving: secret "seaweedfs-tls-serving" not found
Apr 23 18:01:41.888748 ip-10-0-142-106 kubenswrapper[2574]: E0423 18:01:41.888732 2574 projected.go:194] Error preparing data for projected volume seaweedfs-tls-serving for pod kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m: secret "seaweedfs-tls-serving" not found
Apr 23 18:01:41.888939 ip-10-0-142-106 kubenswrapper[2574]: E0423 18:01:41.888787 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b493e048-e303-4172-9840-2e03434d0484-seaweedfs-tls-serving podName:b493e048-e303-4172-9840-2e03434d0484 nodeName:}" failed. No retries permitted until 2026-04-23 18:01:42.388771357 +0000 UTC m=+540.286225104 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "seaweedfs-tls-serving" (UniqueName: "kubernetes.io/projected/b493e048-e303-4172-9840-2e03434d0484-seaweedfs-tls-serving") pod "seaweedfs-tls-serving-7fd5766db9-qtc9m" (UID: "b493e048-e303-4172-9840-2e03434d0484") : secret "seaweedfs-tls-serving" not found
Apr 23 18:01:41.889001 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.888964 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b493e048-e303-4172-9840-2e03434d0484-data\") pod \"seaweedfs-tls-serving-7fd5766db9-qtc9m\" (UID: \"b493e048-e303-4172-9840-2e03434d0484\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m"
Apr 23 18:01:41.898607 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:41.898553 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgh6v\" (UniqueName: \"kubernetes.io/projected/b493e048-e303-4172-9840-2e03434d0484-kube-api-access-tgh6v\") pod \"seaweedfs-tls-serving-7fd5766db9-qtc9m\" (UID: \"b493e048-e303-4172-9840-2e03434d0484\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m"
Apr 23 18:01:42.393969 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:42.393931 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"seaweedfs-tls-serving\" (UniqueName: \"kubernetes.io/projected/b493e048-e303-4172-9840-2e03434d0484-seaweedfs-tls-serving\") pod \"seaweedfs-tls-serving-7fd5766db9-qtc9m\" (UID: \"b493e048-e303-4172-9840-2e03434d0484\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m"
Apr 23 18:01:42.396499 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:42.396475 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"seaweedfs-tls-serving\" (UniqueName: \"kubernetes.io/projected/b493e048-e303-4172-9840-2e03434d0484-seaweedfs-tls-serving\") pod \"seaweedfs-tls-serving-7fd5766db9-qtc9m\" (UID: \"b493e048-e303-4172-9840-2e03434d0484\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m"
Apr 23 18:01:42.562586 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:42.562547 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"default-dockercfg-8fzpf\""
Apr 23 18:01:42.570846 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:42.570825 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m"
Apr 23 18:01:42.694229 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:42.694207 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m"]
Apr 23 18:01:42.696173 ip-10-0-142-106 kubenswrapper[2574]: W0423 18:01:42.696148 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb493e048_e303_4172_9840_2e03434d0484.slice/crio-f711783f6101f1352d94e1e0ed88d633b99cdf0ab4de04fdc5a6ddcc546a15db WatchSource:0}: Error finding container f711783f6101f1352d94e1e0ed88d633b99cdf0ab4de04fdc5a6ddcc546a15db: Status 404 returned error can't find the container with id f711783f6101f1352d94e1e0ed88d633b99cdf0ab4de04fdc5a6ddcc546a15db
Apr 23 18:01:43.062745 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:43.062717 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"seaweedfs-tls-serving-artifact\""
Apr 23 18:01:43.245275 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:43.245241 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m" event={"ID":"b493e048-e303-4172-9840-2e03434d0484","Type":"ContainerStarted","Data":"629c95f098cd2afb0425be780b427d6a48419cf69c1266d7fec52f7f3b7dc3ac"}
Apr 23 18:01:43.245414 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:43.245281 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m" event={"ID":"b493e048-e303-4172-9840-2e03434d0484","Type":"ContainerStarted","Data":"f711783f6101f1352d94e1e0ed88d633b99cdf0ab4de04fdc5a6ddcc546a15db"}
Apr 23 18:01:43.263905 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:01:43.263861 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/seaweedfs-tls-serving-7fd5766db9-qtc9m" podStartSLOduration=1.901080773 podStartE2EDuration="2.263847197s" podCreationTimestamp="2026-04-23 18:01:41 +0000 UTC" firstStartedPulling="2026-04-23 18:01:42.697297939 +0000 UTC m=+540.594751687" lastFinishedPulling="2026-04-23 18:01:43.060064347 +0000 UTC m=+540.957518111" observedRunningTime="2026-04-23 18:01:43.261726111 +0000 UTC m=+541.159179881" watchObservedRunningTime="2026-04-23 18:01:43.263847197 +0000 UTC m=+541.161300974"
Apr 23 18:02:42.557768 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:02:42.557738 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log"
Apr 23 18:02:42.558728 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:02:42.558704 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log"
Apr 23 18:02:42.558890 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:02:42.558873 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log"
Apr 23 18:02:42.559973 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:02:42.559953 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log"
Apr 23 18:06:15.581231 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.581151 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-bb96c7b68-9s79w"]
Apr 23 18:06:15.584306 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.584286 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.592884 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.592860 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bb96c7b68-9s79w"]
Apr 23 18:06:15.627804 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.627770 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/23ee327a-48f6-4f53-90b4-aac1a60786e7-oauth-serving-cert\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.627804 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.627821 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzpnr\" (UniqueName: \"kubernetes.io/projected/23ee327a-48f6-4f53-90b4-aac1a60786e7-kube-api-access-tzpnr\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.628042 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.627920 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/23ee327a-48f6-4f53-90b4-aac1a60786e7-console-config\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.628042 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.627963 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/23ee327a-48f6-4f53-90b4-aac1a60786e7-console-serving-cert\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.628042 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.627995 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/23ee327a-48f6-4f53-90b4-aac1a60786e7-service-ca\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.628042 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.628029 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23ee327a-48f6-4f53-90b4-aac1a60786e7-trusted-ca-bundle\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.628187 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.628050 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/23ee327a-48f6-4f53-90b4-aac1a60786e7-console-oauth-config\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.728828 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.728788 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/23ee327a-48f6-4f53-90b4-aac1a60786e7-oauth-serving-cert\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.728828 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.728832 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tzpnr\" (UniqueName: \"kubernetes.io/projected/23ee327a-48f6-4f53-90b4-aac1a60786e7-kube-api-access-tzpnr\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.729077 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.728906 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/23ee327a-48f6-4f53-90b4-aac1a60786e7-console-config\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.729077 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.728932 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/23ee327a-48f6-4f53-90b4-aac1a60786e7-console-serving-cert\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.729077 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.728955 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/23ee327a-48f6-4f53-90b4-aac1a60786e7-service-ca\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.729077 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.728982 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23ee327a-48f6-4f53-90b4-aac1a60786e7-trusted-ca-bundle\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.729077 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.729015 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/23ee327a-48f6-4f53-90b4-aac1a60786e7-console-oauth-config\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.729657 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.729629 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/23ee327a-48f6-4f53-90b4-aac1a60786e7-oauth-serving-cert\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.729793 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.729673 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/23ee327a-48f6-4f53-90b4-aac1a60786e7-console-config\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.729793 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.729705 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/23ee327a-48f6-4f53-90b4-aac1a60786e7-service-ca\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.729882 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.729841 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23ee327a-48f6-4f53-90b4-aac1a60786e7-trusted-ca-bundle\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.731286 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.731257 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/23ee327a-48f6-4f53-90b4-aac1a60786e7-console-oauth-config\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.731396 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.731382 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/23ee327a-48f6-4f53-90b4-aac1a60786e7-console-serving-cert\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.738211 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.738191 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzpnr\" (UniqueName: \"kubernetes.io/projected/23ee327a-48f6-4f53-90b4-aac1a60786e7-kube-api-access-tzpnr\") pod \"console-bb96c7b68-9s79w\" (UID: \"23ee327a-48f6-4f53-90b4-aac1a60786e7\") " pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:15.894193 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:15.894097 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bb96c7b68-9s79w"
Apr 23 18:06:16.013917 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:16.013891 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bb96c7b68-9s79w"]
Apr 23 18:06:16.015886 ip-10-0-142-106 kubenswrapper[2574]: W0423 18:06:16.015862 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23ee327a_48f6_4f53_90b4_aac1a60786e7.slice/crio-f88c491491674a468c5a6b6bdaeae368337bb8548eee7cbb209bdcb9b1f71e25 WatchSource:0}: Error finding container f88c491491674a468c5a6b6bdaeae368337bb8548eee7cbb209bdcb9b1f71e25: Status 404 returned error can't find the container with id f88c491491674a468c5a6b6bdaeae368337bb8548eee7cbb209bdcb9b1f71e25
Apr 23 18:06:16.017499 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:16.017485 2574 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 18:06:16.100780 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:16.100742 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bb96c7b68-9s79w" event={"ID":"23ee327a-48f6-4f53-90b4-aac1a60786e7","Type":"ContainerStarted","Data":"f2cfa39a49e6f767eb69af7db99d173e6bdd812914228a51b209f9ac8bd2caee"}
Apr 23 18:06:16.100942 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:16.100784 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bb96c7b68-9s79w" event={"ID":"23ee327a-48f6-4f53-90b4-aac1a60786e7","Type":"ContainerStarted","Data":"f88c491491674a468c5a6b6bdaeae368337bb8548eee7cbb209bdcb9b1f71e25"}
Apr 23 18:06:16.120232 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:16.120175 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-bb96c7b68-9s79w" podStartSLOduration=1.120159345 podStartE2EDuration="1.120159345s" podCreationTimestamp="2026-04-23 18:06:15 +0000 UTC"
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:06:16.119206247 +0000 UTC m=+814.016660018" watchObservedRunningTime="2026-04-23 18:06:16.120159345 +0000 UTC m=+814.017613115" Apr 23 18:06:25.895295 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:25.895257 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-bb96c7b68-9s79w" Apr 23 18:06:25.895295 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:25.895297 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-bb96c7b68-9s79w" Apr 23 18:06:25.900051 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:25.900029 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-bb96c7b68-9s79w" Apr 23 18:06:26.137107 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:26.137081 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bb96c7b68-9s79w" Apr 23 18:06:26.187895 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:26.187813 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-66d4b6db74-8sdzx"] Apr 23 18:06:51.206708 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.206632 2574 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-66d4b6db74-8sdzx" podUID="c312394a-d8e0-4056-aab6-7d6361de8521" containerName="console" containerID="cri-o://6ba471a405284910b32eeb1948786c48b222674fe7a1b4a98fdf0f6833abde15" gracePeriod=15 Apr 23 18:06:51.337752 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.337709 2574 patch_prober.go:28] interesting pod/console-66d4b6db74-8sdzx container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.132.0.15:8443/health\": dial tcp 10.132.0.15:8443: connect: connection refused" start-of-body= 
Apr 23 18:06:51.337911 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.337766 2574 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/console-66d4b6db74-8sdzx" podUID="c312394a-d8e0-4056-aab6-7d6361de8521" containerName="console" probeResult="failure" output="Get \"https://10.132.0.15:8443/health\": dial tcp 10.132.0.15:8443: connect: connection refused" Apr 23 18:06:51.453001 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.452979 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-66d4b6db74-8sdzx_c312394a-d8e0-4056-aab6-7d6361de8521/console/0.log" Apr 23 18:06:51.453117 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.453039 2574 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 18:06:51.636232 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.636200 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-service-ca\") pod \"c312394a-d8e0-4056-aab6-7d6361de8521\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " Apr 23 18:06:51.636232 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.636242 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-oauth-serving-cert\") pod \"c312394a-d8e0-4056-aab6-7d6361de8521\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " Apr 23 18:06:51.636497 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.636272 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmz78\" (UniqueName: \"kubernetes.io/projected/c312394a-d8e0-4056-aab6-7d6361de8521-kube-api-access-nmz78\") pod \"c312394a-d8e0-4056-aab6-7d6361de8521\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " Apr 23 18:06:51.636497 
ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.636291 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-console-config\") pod \"c312394a-d8e0-4056-aab6-7d6361de8521\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " Apr 23 18:06:51.636497 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.636320 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-trusted-ca-bundle\") pod \"c312394a-d8e0-4056-aab6-7d6361de8521\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " Apr 23 18:06:51.636497 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.636369 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c312394a-d8e0-4056-aab6-7d6361de8521-console-serving-cert\") pod \"c312394a-d8e0-4056-aab6-7d6361de8521\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " Apr 23 18:06:51.636497 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.636396 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c312394a-d8e0-4056-aab6-7d6361de8521-console-oauth-config\") pod \"c312394a-d8e0-4056-aab6-7d6361de8521\" (UID: \"c312394a-d8e0-4056-aab6-7d6361de8521\") " Apr 23 18:06:51.636766 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.636730 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-console-config" (OuterVolumeSpecName: "console-config") pod "c312394a-d8e0-4056-aab6-7d6361de8521" (UID: "c312394a-d8e0-4056-aab6-7d6361de8521"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:06:51.636766 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.636751 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c312394a-d8e0-4056-aab6-7d6361de8521" (UID: "c312394a-d8e0-4056-aab6-7d6361de8521"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:06:51.636854 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.636650 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-service-ca" (OuterVolumeSpecName: "service-ca") pod "c312394a-d8e0-4056-aab6-7d6361de8521" (UID: "c312394a-d8e0-4056-aab6-7d6361de8521"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:06:51.636915 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.636877 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c312394a-d8e0-4056-aab6-7d6361de8521" (UID: "c312394a-d8e0-4056-aab6-7d6361de8521"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:06:51.638675 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.638649 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c312394a-d8e0-4056-aab6-7d6361de8521-kube-api-access-nmz78" (OuterVolumeSpecName: "kube-api-access-nmz78") pod "c312394a-d8e0-4056-aab6-7d6361de8521" (UID: "c312394a-d8e0-4056-aab6-7d6361de8521"). InnerVolumeSpecName "kube-api-access-nmz78". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:06:51.638759 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.638698 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c312394a-d8e0-4056-aab6-7d6361de8521-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c312394a-d8e0-4056-aab6-7d6361de8521" (UID: "c312394a-d8e0-4056-aab6-7d6361de8521"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:06:51.638874 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.638851 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c312394a-d8e0-4056-aab6-7d6361de8521-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c312394a-d8e0-4056-aab6-7d6361de8521" (UID: "c312394a-d8e0-4056-aab6-7d6361de8521"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:06:51.737485 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.737448 2574 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c312394a-d8e0-4056-aab6-7d6361de8521-console-serving-cert\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\"" Apr 23 18:06:51.737485 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.737477 2574 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c312394a-d8e0-4056-aab6-7d6361de8521-console-oauth-config\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\"" Apr 23 18:06:51.737485 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.737487 2574 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-service-ca\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\"" Apr 23 18:06:51.737485 ip-10-0-142-106 
kubenswrapper[2574]: I0423 18:06:51.737496 2574 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-oauth-serving-cert\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\"" Apr 23 18:06:51.737860 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.737506 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmz78\" (UniqueName: \"kubernetes.io/projected/c312394a-d8e0-4056-aab6-7d6361de8521-kube-api-access-nmz78\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\"" Apr 23 18:06:51.737860 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.737516 2574 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-console-config\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\"" Apr 23 18:06:51.737860 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:51.737525 2574 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c312394a-d8e0-4056-aab6-7d6361de8521-trusted-ca-bundle\") on node \"ip-10-0-142-106.ec2.internal\" DevicePath \"\"" Apr 23 18:06:52.214206 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:52.214180 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-66d4b6db74-8sdzx_c312394a-d8e0-4056-aab6-7d6361de8521/console/0.log" Apr 23 18:06:52.214618 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:52.214218 2574 generic.go:358] "Generic (PLEG): container finished" podID="c312394a-d8e0-4056-aab6-7d6361de8521" containerID="6ba471a405284910b32eeb1948786c48b222674fe7a1b4a98fdf0f6833abde15" exitCode=2 Apr 23 18:06:52.214618 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:52.214250 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66d4b6db74-8sdzx" 
event={"ID":"c312394a-d8e0-4056-aab6-7d6361de8521","Type":"ContainerDied","Data":"6ba471a405284910b32eeb1948786c48b222674fe7a1b4a98fdf0f6833abde15"} Apr 23 18:06:52.214618 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:52.214283 2574 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66d4b6db74-8sdzx" Apr 23 18:06:52.214618 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:52.214298 2574 scope.go:117] "RemoveContainer" containerID="6ba471a405284910b32eeb1948786c48b222674fe7a1b4a98fdf0f6833abde15" Apr 23 18:06:52.214618 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:52.214286 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66d4b6db74-8sdzx" event={"ID":"c312394a-d8e0-4056-aab6-7d6361de8521","Type":"ContainerDied","Data":"037615e5fac32a8f6ded482af3178661741f779970391545ff59ff6853a8aa82"} Apr 23 18:06:52.224548 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:52.224530 2574 scope.go:117] "RemoveContainer" containerID="6ba471a405284910b32eeb1948786c48b222674fe7a1b4a98fdf0f6833abde15" Apr 23 18:06:52.224827 ip-10-0-142-106 kubenswrapper[2574]: E0423 18:06:52.224805 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ba471a405284910b32eeb1948786c48b222674fe7a1b4a98fdf0f6833abde15\": container with ID starting with 6ba471a405284910b32eeb1948786c48b222674fe7a1b4a98fdf0f6833abde15 not found: ID does not exist" containerID="6ba471a405284910b32eeb1948786c48b222674fe7a1b4a98fdf0f6833abde15" Apr 23 18:06:52.224878 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:52.224836 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ba471a405284910b32eeb1948786c48b222674fe7a1b4a98fdf0f6833abde15"} err="failed to get container status \"6ba471a405284910b32eeb1948786c48b222674fe7a1b4a98fdf0f6833abde15\": rpc error: code = NotFound desc = could not find container 
\"6ba471a405284910b32eeb1948786c48b222674fe7a1b4a98fdf0f6833abde15\": container with ID starting with 6ba471a405284910b32eeb1948786c48b222674fe7a1b4a98fdf0f6833abde15 not found: ID does not exist" Apr 23 18:06:52.238828 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:52.238805 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-66d4b6db74-8sdzx"] Apr 23 18:06:52.242635 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:52.242610 2574 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-66d4b6db74-8sdzx"] Apr 23 18:06:52.646060 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:06:52.645982 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c312394a-d8e0-4056-aab6-7d6361de8521" path="/var/lib/kubelet/pods/c312394a-d8e0-4056-aab6-7d6361de8521/volumes" Apr 23 18:07:42.582327 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:07:42.582303 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:07:42.583413 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:07:42.583383 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:07:42.584897 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:07:42.584879 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:07:42.586048 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:07:42.586033 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:12:42.609374 ip-10-0-142-106 kubenswrapper[2574]: I0423 
18:12:42.609347 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:12:42.610522 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:12:42.610491 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:12:42.610875 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:12:42.610858 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:12:42.612028 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:12:42.611996 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:17:42.635162 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:17:42.635132 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:17:42.636279 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:17:42.636253 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:17:42.636893 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:17:42.636873 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:17:42.637974 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:17:42.637957 2574 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:22:42.655675 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:22:42.655648 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:22:42.656893 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:22:42.656868 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:22:42.658502 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:22:42.658483 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:22:42.659383 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:22:42.659366 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:27:42.682138 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:27:42.682107 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:27:42.683514 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:27:42.683491 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:27:42.684859 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:27:42.684839 2574 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:27:42.685912 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:27:42.685894 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:32:42.702826 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:32:42.702787 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:32:42.704161 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:32:42.704133 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:32:42.705850 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:32:42.705833 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:32:42.706811 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:32:42.706790 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:37:42.723821 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:37:42.723791 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:37:42.724851 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:37:42.724833 2574 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:37:42.726090 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:37:42.726073 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:37:42.727080 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:37:42.727065 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:42:42.753209 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:42:42.753184 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:42:42.754407 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:42:42.754384 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:42:42.756337 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:42:42.756320 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:42:42.757401 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:42:42.757378 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:47:42.772424 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:47:42.772396 2574 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:47:42.773370 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:47:42.773352 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:47:42.777563 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:47:42.777535 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:47:42.778505 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:47:42.778478 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:52:42.793846 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:52:42.793815 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:52:42.794776 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:52:42.794752 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log" Apr 23 18:52:42.798086 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:52:42.798065 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log" Apr 23 18:52:42.799072 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:52:42.799056 2574 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log"
Apr 23 18:57:42.814751 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:57:42.814709 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log"
Apr 23 18:57:42.815884 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:57:42.815854 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log"
Apr 23 18:57:42.819854 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:57:42.819833 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log"
Apr 23 18:57:42.820906 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:57:42.820887 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-142-106.ec2.internal_2e1ed6752f88ed3103e33f18a9adc980/kube-rbac-proxy-crio/3.log"
Apr 23 18:57:59.129004 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:57:59.128964 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-p5ndb_f5d25d8b-ddd9-4d17-ad8d-3eb35aadb1bb/global-pull-secret-syncer/0.log"
Apr 23 18:57:59.316363 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:57:59.316337 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-wx25k_0dbc6169-c545-4cc9-a3ea-83c161f64108/konnectivity-agent/0.log"
Apr 23 18:57:59.433321 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:57:59.433232 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-142-106.ec2.internal_bad58ef41963d01887fbfb46c2febb18/haproxy/0.log"
Apr 23 18:58:03.027980 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:03.027941 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-574f7989c4-mftsr_61cf4b7e-bc78-4b06-a4ed-bbbcd8031991/metrics-server/0.log"
Apr 23 18:58:03.058309 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:03.058281 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-7dccd58f55-hb655_e9b2584d-b2ad-4cda-af50-a4d6572658b0/monitoring-plugin/0.log"
Apr 23 18:58:03.189320 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:03.189295 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-9s9cp_c83e50d5-4354-484c-97ef-786bd15344a0/node-exporter/0.log"
Apr 23 18:58:03.213744 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:03.213723 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-9s9cp_c83e50d5-4354-484c-97ef-786bd15344a0/kube-rbac-proxy/0.log"
Apr 23 18:58:03.240854 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:03.240832 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-9s9cp_c83e50d5-4354-484c-97ef-786bd15344a0/init-textfile/0.log"
Apr 23 18:58:03.772852 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:03.772819 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-666ccd8c8f-k7gzj_6628d669-5605-4946-ad62-a1f2c5adce5c/telemeter-client/0.log"
Apr 23 18:58:03.802540 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:03.802509 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-666ccd8c8f-k7gzj_6628d669-5605-4946-ad62-a1f2c5adce5c/reload/0.log"
Apr 23 18:58:03.830074 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:03.830050 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-666ccd8c8f-k7gzj_6628d669-5605-4946-ad62-a1f2c5adce5c/kube-rbac-proxy/0.log"
Apr 23 18:58:06.063964 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.063935 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-bb96c7b68-9s79w_23ee327a-48f6-4f53-90b4-aac1a60786e7/console/0.log"
Apr 23 18:58:06.141269 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.141234 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"]
Apr 23 18:58:06.142285 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.142257 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c312394a-d8e0-4056-aab6-7d6361de8521" containerName="console"
Apr 23 18:58:06.142451 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.142439 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="c312394a-d8e0-4056-aab6-7d6361de8521" containerName="console"
Apr 23 18:58:06.142751 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.142735 2574 memory_manager.go:356] "RemoveStaleState removing state" podUID="c312394a-d8e0-4056-aab6-7d6361de8521" containerName="console"
Apr 23 18:58:06.146330 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.146308 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.148947 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.148928 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-pzxs4\"/\"openshift-service-ca.crt\""
Apr 23 18:58:06.150091 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.150070 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-pzxs4\"/\"default-dockercfg-92lwm\""
Apr 23 18:58:06.150191 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.150101 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-pzxs4\"/\"kube-root-ca.crt\""
Apr 23 18:58:06.153135 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.153113 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"]
Apr 23 18:58:06.174607 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.174560 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0-lib-modules\") pod \"perf-node-gather-daemonset-fcm5d\" (UID: \"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.174705 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.174629 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0-podres\") pod \"perf-node-gather-daemonset-fcm5d\" (UID: \"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.174705 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.174674 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0-proc\") pod \"perf-node-gather-daemonset-fcm5d\" (UID: \"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.174705 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.174692 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7rwl\" (UniqueName: \"kubernetes.io/projected/d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0-kube-api-access-l7rwl\") pod \"perf-node-gather-daemonset-fcm5d\" (UID: \"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.174810 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.174767 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0-sys\") pod \"perf-node-gather-daemonset-fcm5d\" (UID: \"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.275085 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.275059 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0-proc\") pod \"perf-node-gather-daemonset-fcm5d\" (UID: \"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.275178 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.275090 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7rwl\" (UniqueName: \"kubernetes.io/projected/d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0-kube-api-access-l7rwl\") pod \"perf-node-gather-daemonset-fcm5d\" (UID: \"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.275178 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.275119 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0-sys\") pod \"perf-node-gather-daemonset-fcm5d\" (UID: \"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.275178 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.275167 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0-lib-modules\") pod \"perf-node-gather-daemonset-fcm5d\" (UID: \"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.275300 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.275176 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0-proc\") pod \"perf-node-gather-daemonset-fcm5d\" (UID: \"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.275300 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.275184 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0-podres\") pod \"perf-node-gather-daemonset-fcm5d\" (UID: \"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.275300 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.275263 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0-sys\") pod \"perf-node-gather-daemonset-fcm5d\" (UID: \"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.275300 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.275284 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0-podres\") pod \"perf-node-gather-daemonset-fcm5d\" (UID: \"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.275440 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.275322 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0-lib-modules\") pod \"perf-node-gather-daemonset-fcm5d\" (UID: \"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.284584 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.284558 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7rwl\" (UniqueName: \"kubernetes.io/projected/d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0-kube-api-access-l7rwl\") pod \"perf-node-gather-daemonset-fcm5d\" (UID: \"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0\") " pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.458791 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.458740 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.586775 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.586750 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"]
Apr 23 18:58:06.589039 ip-10-0-142-106 kubenswrapper[2574]: W0423 18:58:06.589016 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd1142e5c_c453_4dc5_a1dc_c0cdb592d0b0.slice/crio-2d34f4ff11ec55fcb1bc76d186ae508af1c3b1aad0eada60cf2c910d443978df WatchSource:0}: Error finding container 2d34f4ff11ec55fcb1bc76d186ae508af1c3b1aad0eada60cf2c910d443978df: Status 404 returned error can't find the container with id 2d34f4ff11ec55fcb1bc76d186ae508af1c3b1aad0eada60cf2c910d443978df
Apr 23 18:58:06.590602 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.590565 2574 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 18:58:06.857950 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.857914 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d" event={"ID":"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0","Type":"ContainerStarted","Data":"96fc1ae1f467fb855a3330a651ef07c62b4204dc07d185dcff8f5576ae286c89"}
Apr 23 18:58:06.857950 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.857953 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d" event={"ID":"d1142e5c-c453-4dc5-a1dc-c0cdb592d0b0","Type":"ContainerStarted","Data":"2d34f4ff11ec55fcb1bc76d186ae508af1c3b1aad0eada60cf2c910d443978df"}
Apr 23 18:58:06.858108 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.858048 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:06.878831 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:06.878784 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d" podStartSLOduration=0.878773487 podStartE2EDuration="878.773487ms" podCreationTimestamp="2026-04-23 18:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:58:06.877602473 +0000 UTC m=+3924.775056245" watchObservedRunningTime="2026-04-23 18:58:06.878773487 +0000 UTC m=+3924.776227256"
Apr 23 18:58:07.436560 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:07.436527 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-n9t2k_c0717f3c-f89c-4cad-a2c7-5e017bcc9292/dns/0.log"
Apr 23 18:58:07.460182 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:07.460150 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-n9t2k_c0717f3c-f89c-4cad-a2c7-5e017bcc9292/kube-rbac-proxy/0.log"
Apr 23 18:58:07.545909 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:07.545883 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-cnp6f_4fed6b9f-295e-4b13-8a53-cddd432bda46/dns-node-resolver/0.log"
Apr 23 18:58:08.082409 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:08.082381 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_image-registry-7dc86d8d7f-wg7qk_ab4270c5-eb00-4f8a-8f0e-3386237c56e1/registry/0.log"
Apr 23 18:58:08.170165 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:08.170136 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-xcs8j_9e31c73e-77bf-4968-b370-f732e248be97/node-ca/0.log"
Apr 23 18:58:09.387937 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:09.387901 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-kms9t_63127b59-6d72-4d18-85c3-8766abc25908/serve-healthcheck-canary/0.log"
Apr 23 18:58:09.954205 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:09.954167 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-th8sw_d7a76e75-dee9-437f-afaf-611235bcda31/kube-rbac-proxy/0.log"
Apr 23 18:58:09.981413 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:09.981381 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-th8sw_d7a76e75-dee9-437f-afaf-611235bcda31/exporter/0.log"
Apr 23 18:58:10.011395 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:10.011374 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-th8sw_d7a76e75-dee9-437f-afaf-611235bcda31/extractor/0.log"
Apr 23 18:58:12.317456 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:12.317420 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_llmisvc-controller-manager-68cc5db7c4-lfff6_f793afa2-cfb2-422f-924a-9992608ca10c/manager/0.log"
Apr 23 18:58:12.344704 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:12.344675 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_model-serving-api-86f7b4b499-rl6jq_96e2ad04-9dfa-4fb4-99b1-e68969c34d0f/server/0.log"
Apr 23 18:58:12.759212 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:12.759178 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_odh-model-controller-696fc77849-z46vh_e8b66bea-9e43-4836-96c0-d927a3187933/manager/0.log"
Apr 23 18:58:12.870729 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:12.870704 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-pzxs4/perf-node-gather-daemonset-fcm5d"
Apr 23 18:58:12.942159 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:12.942127 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_seaweedfs-tls-serving-7fd5766db9-qtc9m_b493e048-e303-4172-9840-2e03434d0484/seaweedfs-tls-serving/0.log"
Apr 23 18:58:19.052145 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:19.052108 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5bjmz_5645d713-95ce-41af-878d-48178971c03c/kube-multus/0.log"
Apr 23 18:58:19.082431 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:19.082409 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-4zn98_84132246-7311-4103-a045-d865e6d62737/kube-multus-additional-cni-plugins/0.log"
Apr 23 18:58:19.106643 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:19.106619 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-4zn98_84132246-7311-4103-a045-d865e6d62737/egress-router-binary-copy/0.log"
Apr 23 18:58:19.129853 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:19.129830 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-4zn98_84132246-7311-4103-a045-d865e6d62737/cni-plugins/0.log"
Apr 23 18:58:19.154373 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:19.154354 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-4zn98_84132246-7311-4103-a045-d865e6d62737/bond-cni-plugin/0.log"
Apr 23 18:58:19.179252 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:19.179234 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-4zn98_84132246-7311-4103-a045-d865e6d62737/routeoverride-cni/0.log"
Apr 23 18:58:19.211706 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:19.211686 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-4zn98_84132246-7311-4103-a045-d865e6d62737/whereabouts-cni-bincopy/0.log"
Apr 23 18:58:19.235637 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:19.235613 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-4zn98_84132246-7311-4103-a045-d865e6d62737/whereabouts-cni/0.log"
Apr 23 18:58:19.753548 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:19.753518 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-45ztw_5af1b6bf-71a6-4257-9a8a-b48c1c14659c/network-metrics-daemon/0.log"
Apr 23 18:58:19.777513 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:19.777484 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-45ztw_5af1b6bf-71a6-4257-9a8a-b48c1c14659c/kube-rbac-proxy/0.log"
Apr 23 18:58:20.656481 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:20.656450 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-controller/0.log"
Apr 23 18:58:20.677264 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:20.677236 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/0.log"
Apr 23 18:58:20.709479 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:20.709452 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovn-acl-logging/1.log"
Apr 23 18:58:20.729774 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:20.729752 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/kube-rbac-proxy-node/0.log"
Apr 23 18:58:20.754651 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:20.754633 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/kube-rbac-proxy-ovn-metrics/0.log"
Apr 23 18:58:20.780161 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:20.780140 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/northd/0.log"
Apr 23 18:58:20.805540 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:20.805516 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/nbdb/0.log"
Apr 23 18:58:20.829997 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:20.829976 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/sbdb/0.log"
Apr 23 18:58:20.998052 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:20.998030 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2kfrf_11c76c2e-7e8d-4076-bf3e-40c9a12aad39/ovnkube-controller/0.log"
Apr 23 18:58:22.941398 ip-10-0-142-106 kubenswrapper[2574]: I0423 18:58:22.941363 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-88zs6_35ee14f0-f248-4da4-a578-5901f2cd8f5f/network-check-target-container/0.log"
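Captures like the above are often hard-wrapped at arbitrary widths, which splits single journal records across physical lines. A minimal sketch (not part of the captured log; the helper names `split_entries` and `parsed_log_paths` are my own) that rejoins wrapped text into one record per timestamp and pulls out the container log paths reported by the kubelet's "Finished parsing log file" messages:

```python
import re

# Each kubelet record in this capture begins with a journald timestamp of the
# form "Apr 23 HH:MM:SS.ffffff". A zero-width lookahead split (supported by
# re.split since Python 3.7) keeps the timestamp attached to its record.
ENTRY_START = re.compile(r'(?=Apr 23 \d{2}:\d{2}:\d{2}\.\d{6} )')
PATH_FIELD = re.compile(r'"Finished parsing log file" path="([^"]+)"')

def split_entries(text: str) -> list[str]:
    """Flatten hard-wrapped journal output, then split at each timestamp."""
    flat = " ".join(line.strip() for line in text.splitlines())
    return [e.strip() for e in ENTRY_START.split(flat) if e.strip()]

def parsed_log_paths(entries: list[str]) -> list[str]:
    """Extract the path= value from each 'Finished parsing log file' record."""
    paths = []
    for entry in entries:
        m = PATH_FIELD.search(entry)
        if m:
            paths.append(m.group(1))
    return paths
```

The month/day prefix is hard-coded to match this specific capture; a general tool would match any journald short-precise timestamp instead.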