Apr 23 17:49:29.074703 ip-10-0-133-178 systemd[1]: kubelet.service: Failed to load environment files: No such file or directory
Apr 23 17:49:29.074714 ip-10-0-133-178 systemd[1]: kubelet.service: Failed to run 'start-pre' task: No such file or directory
Apr 23 17:49:29.074721 ip-10-0-133-178 systemd[1]: kubelet.service: Failed with result 'resources'.
Apr 23 17:49:29.074984 ip-10-0-133-178 systemd[1]: Failed to start Kubernetes Kubelet.
Apr 23 17:49:39.309969 ip-10-0-133-178 systemd[1]: kubelet.service: Failed to schedule restart job: Unit crio.service not found.
Apr 23 17:49:39.309986 ip-10-0-133-178 systemd[1]: kubelet.service: Failed with result 'resources'.
-- Boot 461aad3fe5614282962b0adeb3c51a7f --
Apr 23 17:52:04.664726 ip-10-0-133-178 systemd[1]: Starting Kubernetes Kubelet...
Apr 23 17:52:05.180948 ip-10-0-133-178 kubenswrapper[2572]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:52:05.180948 ip-10-0-133-178 kubenswrapper[2572]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 23 17:52:05.180948 ip-10-0-133-178 kubenswrapper[2572]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:52:05.180948 ip-10-0-133-178 kubenswrapper[2572]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 23 17:52:05.180948 ip-10-0-133-178 kubenswrapper[2572]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:52:05.183807 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.183729    2572 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 23 17:52:05.187812 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187797    2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:05.187812 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187812    2572 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187815    2572 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187819    2572 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187822    2572 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187825    2572 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187828    2572 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187830    2572 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187833    2572 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187836    2572 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187839    2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187842    2572 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187848    2572 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187851    2572 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187854    2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187856    2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187859    2572 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187861    2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187864    2572 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187867    2572 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187869    2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:05.187878 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187872    2572 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187874    2572 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187877    2572 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187880    2572 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187882    2572 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187885    2572 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187888    2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187891    2572 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187893    2572 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187897    2572 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187901    2572 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187904    2572 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187907    2572 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187910    2572 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187913    2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187916    2572 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187918    2572 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187921    2572 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187923    2572 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:05.188504 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187925    2572 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187928    2572 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187930    2572 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187933    2572 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187935    2572 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187938    2572 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187941    2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187943    2572 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187946    2572 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187949    2572 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187953    2572 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187956    2572 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187958    2572 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187961    2572 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187963    2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187967    2572 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187969    2572 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187972    2572 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187975    2572 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:05.189175 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187977    2572 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187980    2572 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187984    2572 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187986    2572 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187989    2572 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187991    2572 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187994    2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.187997    2572 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188000    2572 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188002    2572 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188005    2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188008    2572 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188010    2572 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188012    2572 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188015    2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188017    2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188020    2572 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188022    2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188025    2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188028    2572 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:05.189715 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188030    2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188033    2572 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188036    2572 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188038    2572 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188041    2572 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188044    2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.188046    2572 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.189998    2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190010    2572 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190014    2572 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190017    2572 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190020    2572 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190022    2572 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190025    2572 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190027    2572 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190030    2572 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190032    2572 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190035    2572 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190038    2572 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190040    2572 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:05.190206 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190043    2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190046    2572 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190048    2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190051    2572 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190053    2572 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190056    2572 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190058    2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190061    2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190063    2572 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190066    2572 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190068    2572 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190071    2572 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190074    2572 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190077    2572 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190080    2572 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190082    2572 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190085    2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190087    2572 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190090    2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190092    2572 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:05.190691 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190096    2572 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190098    2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190101    2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190103    2572 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190106    2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190109    2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190111    2572 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190114    2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190117    2572 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190119    2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190122    2572 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190125    2572 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190127    2572 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190130    2572 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190132    2572 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190135    2572 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190137    2572 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190140    2572 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190142    2572 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190145    2572 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:05.191220 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190147    2572 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190150    2572 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190153    2572 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190156    2572 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190159    2572 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190161    2572 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190163    2572 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190166    2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190169    2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190171    2572 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190174    2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190176    2572 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190180    2572 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190183    2572 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190185    2572 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190187    2572 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190190    2572 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190193    2572 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190195    2572 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190198    2572 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:05.191718 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190200    2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190203    2572 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190206    2572 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190208    2572 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190211    2572 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190214    2572 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190219    2572 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190223    2572 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190226    2572 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190230    2572 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190235    2572 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190238    2572 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190241    2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190310    2572 flags.go:64] FLAG: --address="0.0.0.0"
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190318    2572 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190324    2572 flags.go:64] FLAG: --anonymous-auth="true"
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190329    2572 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190333    2572 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190336    2572 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190340    2572 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 23 17:52:05.192203 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190345    2572 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190348    2572 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190351    2572 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190355    2572 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190359    2572 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190362    2572 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190365    2572 flags.go:64] FLAG: --cgroup-root=""
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190368    2572 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190372    2572 flags.go:64] FLAG: --client-ca-file=""
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190377    2572 flags.go:64] FLAG: --cloud-config=""
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190381    2572 flags.go:64] FLAG: --cloud-provider="external"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190386    2572 flags.go:64] FLAG: --cluster-dns="[]"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190392    2572 flags.go:64] FLAG: --cluster-domain=""
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190397    2572 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190414    2572 flags.go:64] FLAG: --config-dir=""
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190417    2572 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190420    2572 flags.go:64] FLAG: --container-log-max-files="5"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190424    2572 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190428    2572 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190431    2572 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190435    2572 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190438    2572 flags.go:64] FLAG: --contention-profiling="false"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190441    2572 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190444    2572 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190447    2572 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 23 17:52:05.192708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190450    2572 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190454    2572 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190457    2572 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190460    2572 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190463    2572 flags.go:64] FLAG: --enable-load-reader="false"
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190466    2572 flags.go:64] FLAG: --enable-server="true"
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190469    2572 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190473    2572 flags.go:64] FLAG: --event-burst="100"
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190476    2572 flags.go:64] FLAG: --event-qps="50"
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190479    2572 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190483    2572 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190486    2572 flags.go:64] FLAG: --eviction-hard=""
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190491    2572 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190493    2572 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190497    2572 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190500    2572 flags.go:64] FLAG: --eviction-soft=""
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190503    2572 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190506    2572 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190509    2572 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190512    2572 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190515    2572 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190518    2572 flags.go:64] FLAG: 
--fail-swap-on="true" Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190520 2572 flags.go:64] FLAG: --feature-gates="" Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190524 2572 flags.go:64] FLAG: --file-check-frequency="20s" Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190527 2572 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Apr 23 17:52:05.193333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190530 2572 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190534 2572 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190537 2572 flags.go:64] FLAG: --healthz-port="10248" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190540 2572 flags.go:64] FLAG: --help="false" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190543 2572 flags.go:64] FLAG: --hostname-override="ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190546 2572 flags.go:64] FLAG: --housekeeping-interval="10s" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190549 2572 flags.go:64] FLAG: --http-check-frequency="20s" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190552 2572 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190555 2572 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190559 2572 flags.go:64] FLAG: --image-gc-high-threshold="85" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 
17:52:05.190562 2572 flags.go:64] FLAG: --image-gc-low-threshold="80" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190565 2572 flags.go:64] FLAG: --image-service-endpoint="" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190567 2572 flags.go:64] FLAG: --kernel-memcg-notification="false" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190570 2572 flags.go:64] FLAG: --kube-api-burst="100" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190573 2572 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190576 2572 flags.go:64] FLAG: --kube-api-qps="50" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190579 2572 flags.go:64] FLAG: --kube-reserved="" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190582 2572 flags.go:64] FLAG: --kube-reserved-cgroup="" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190586 2572 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190589 2572 flags.go:64] FLAG: --kubelet-cgroups="" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190592 2572 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190595 2572 flags.go:64] FLAG: --lock-file="" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190598 2572 flags.go:64] FLAG: --log-cadvisor-usage="false" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190601 2572 flags.go:64] FLAG: --log-flush-frequency="5s" Apr 23 17:52:05.193948 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190604 2572 flags.go:64] FLAG: --log-json-info-buffer-size="0" Apr 23 17:52:05.194552 
ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190610 2572 flags.go:64] FLAG: --log-json-split-stream="false" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190613 2572 flags.go:64] FLAG: --log-text-info-buffer-size="0" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190616 2572 flags.go:64] FLAG: --log-text-split-stream="false" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190618 2572 flags.go:64] FLAG: --logging-format="text" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190622 2572 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190625 2572 flags.go:64] FLAG: --make-iptables-util-chains="true" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190628 2572 flags.go:64] FLAG: --manifest-url="" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190631 2572 flags.go:64] FLAG: --manifest-url-header="" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190635 2572 flags.go:64] FLAG: --max-housekeeping-interval="15s" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190638 2572 flags.go:64] FLAG: --max-open-files="1000000" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190642 2572 flags.go:64] FLAG: --max-pods="110" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190645 2572 flags.go:64] FLAG: --maximum-dead-containers="-1" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190648 2572 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190651 2572 flags.go:64] FLAG: --memory-manager-policy="None" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190654 2572 
flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190657 2572 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190660 2572 flags.go:64] FLAG: --node-ip="0.0.0.0" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190663 2572 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190670 2572 flags.go:64] FLAG: --node-status-max-images="50" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190673 2572 flags.go:64] FLAG: --node-status-update-frequency="10s" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190676 2572 flags.go:64] FLAG: --oom-score-adj="-999" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190680 2572 flags.go:64] FLAG: --pod-cidr="" Apr 23 17:52:05.194552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190685 2572 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190691 2572 flags.go:64] FLAG: --pod-manifest-path="" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190696 2572 flags.go:64] FLAG: --pod-max-pids="-1" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190700 2572 flags.go:64] FLAG: --pods-per-core="0" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190703 2572 flags.go:64] FLAG: --port="10250" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190707 2572 flags.go:64] FLAG: --protect-kernel-defaults="false" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: 
I0423 17:52:05.190709 2572 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-0bb5f7376deddf126" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190713 2572 flags.go:64] FLAG: --qos-reserved="" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190716 2572 flags.go:64] FLAG: --read-only-port="10255" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190718 2572 flags.go:64] FLAG: --register-node="true" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190721 2572 flags.go:64] FLAG: --register-schedulable="true" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190724 2572 flags.go:64] FLAG: --register-with-taints="" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190728 2572 flags.go:64] FLAG: --registry-burst="10" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190730 2572 flags.go:64] FLAG: --registry-qps="5" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190733 2572 flags.go:64] FLAG: --reserved-cpus="" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190736 2572 flags.go:64] FLAG: --reserved-memory="" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190740 2572 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190743 2572 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190746 2572 flags.go:64] FLAG: --rotate-certificates="false" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190749 2572 flags.go:64] FLAG: --rotate-server-certificates="false" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190754 2572 flags.go:64] FLAG: --runonce="false" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: 
I0423 17:52:05.190756 2572 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190760 2572 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190763 2572 flags.go:64] FLAG: --seccomp-default="false" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190766 2572 flags.go:64] FLAG: --serialize-image-pulls="true" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190769 2572 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Apr 23 17:52:05.195109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190772 2572 flags.go:64] FLAG: --storage-driver-db="cadvisor" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190775 2572 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190778 2572 flags.go:64] FLAG: --storage-driver-password="root" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190781 2572 flags.go:64] FLAG: --storage-driver-secure="false" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190784 2572 flags.go:64] FLAG: --storage-driver-table="stats" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190787 2572 flags.go:64] FLAG: --storage-driver-user="root" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190790 2572 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190793 2572 flags.go:64] FLAG: --sync-frequency="1m0s" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190797 2572 flags.go:64] FLAG: --system-cgroups="" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190800 2572 flags.go:64] FLAG: 
--system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190805 2572 flags.go:64] FLAG: --system-reserved-cgroup="" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190808 2572 flags.go:64] FLAG: --tls-cert-file="" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190812 2572 flags.go:64] FLAG: --tls-cipher-suites="[]" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190816 2572 flags.go:64] FLAG: --tls-min-version="" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190818 2572 flags.go:64] FLAG: --tls-private-key-file="" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190821 2572 flags.go:64] FLAG: --topology-manager-policy="none" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190824 2572 flags.go:64] FLAG: --topology-manager-policy-options="" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190827 2572 flags.go:64] FLAG: --topology-manager-scope="container" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190830 2572 flags.go:64] FLAG: --v="2" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190834 2572 flags.go:64] FLAG: --version="false" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190838 2572 flags.go:64] FLAG: --vmodule="" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190842 2572 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.190845 2572 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Apr 23 17:52:05.195727 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190932 2572 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 23 17:52:05.195727 
ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190936 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190957 2572 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190961 2572 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190965 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190968 2572 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190971 2572 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190974 2572 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190977 2572 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190980 2572 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190982 2572 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190985 2572 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190988 2572 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190991 2572 
feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190994 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190996 2572 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.190999 2572 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191002 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191005 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191007 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191009 2572 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 23 17:52:05.196341 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191012 2572 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191015 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191017 2572 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191020 2572 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191022 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 23 
17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191025 2572 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191027 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191030 2572 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191033 2572 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191035 2572 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191037 2572 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191040 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191042 2572 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191045 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191047 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191050 2572 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191053 2572 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191055 2572 feature_gate.go:328] 
unrecognized feature gate: RouteAdvertisements Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191058 2572 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191060 2572 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 23 17:52:05.197281 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191063 2572 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191065 2572 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191068 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191070 2572 feature_gate.go:328] unrecognized feature gate: Example2 Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191073 2572 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191075 2572 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191078 2572 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191081 2572 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191085 2572 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191088 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191092 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191094 2572 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191097 2572 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191099 2572 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191102 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191106 2572 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191109 2572 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191112 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191115 2572 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 23 17:52:05.198156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191117 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191120 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191123 2572 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191125 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191128 2572 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191131 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191133 2572 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191136 2572 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191138 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 
17:52:05.191141 2572 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191144 2572 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191146 2572 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191149 2572 feature_gate.go:328] unrecognized feature gate: Example Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191154 2572 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191156 2572 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191159 2572 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191161 2572 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191164 2572 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191166 2572 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191169 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 23 17:52:05.198703 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191172 2572 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 23 17:52:05.199463 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191175 2572 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 23 17:52:05.199463 
ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191178 2572 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:05.199463 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191180 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:05.199463 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191182 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:05.199463 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.191185 2572 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:05.199463 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.191190 2572 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 23 17:52:05.199463 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.198912 2572 server.go:530] "Kubelet version" kubeletVersion="v1.33.9"
Apr 23 17:52:05.199463 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.198941 2572 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 23 17:52:05.199463 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199018 2572 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:05.199463 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199027 2572 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:05.199463 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199032 2572 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:52:05.199463 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199036 2572 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:05.199463 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199041 2572 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:05.199463 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199046 2572 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:05.199463 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199050 2572 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:05.199463 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199054 2572 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199061 2572 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199069 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199074 2572 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199079 2572 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199084 2572 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199090 2572 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199094 2572 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199098 2572 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199102 2572 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199106 2572 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199110 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199116 2572 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199120 2572 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199124 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199128 2572 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199133 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199137 2572 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199141 2572 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:52:05.200151 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199144 2572 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199149 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199153 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199157 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199161 2572 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199166 2572 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199170 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199174 2572 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199178 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199182 2572 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199186 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199190 2572 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199194 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199198 2572 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199202 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199207 2572 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199211 2572 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199215 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:05.200734 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199219 2572 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199223 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199228 2572 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199235 2572 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199240 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199245 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199250 2572 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199254 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199258 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199262 2572 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199266 2572 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199270 2572 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199275 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199278 2572 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199283 2572 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199287 2572 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199291 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199295 2572 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199299 2572 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199304 2572 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199308 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:52:05.201285 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199313 2572 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199317 2572 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199321 2572 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199326 2572 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199331 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199335 2572 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199340 2572 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199344 2572 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199348 2572 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199353 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199357 2572 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199361 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199365 2572 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199369 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199374 2572 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199379 2572 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199384 2572 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199388 2572 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199392 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199415 2572 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:05.202156 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199420 2572 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:05.203013 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.199428 2572 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 23 17:52:05.203013 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199618 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:52:05.203013 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199627 2572 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:52:05.203013 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199632 2572 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:52:05.203013 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199637 2572 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:52:05.203013 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199641 2572 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:52:05.203013 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199645 2572 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:52:05.203013 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199649 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:52:05.203013 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199653 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:52:05.203013 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199658 2572 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:52:05.203013 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199662 2572 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:52:05.203013 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199666 2572 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:52:05.203013 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199670 2572 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:52:05.203013 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199675 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:52:05.203013 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199679 2572 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:52:05.203013 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199683 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199688 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199691 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199695 2572 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199699 2572 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199703 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199708 2572 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199712 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199716 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199720 2572 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199724 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199729 2572 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199733 2572 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199738 2572 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199744 2572 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199750 2572 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199756 2572 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199762 2572 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199767 2572 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:52:05.203513 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199771 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199776 2572 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199780 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199785 2572 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199789 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199794 2572 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199799 2572 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199803 2572 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199807 2572 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199812 2572 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199816 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199820 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199824 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199828 2572 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199832 2572 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199837 2572 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199840 2572 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199844 2572 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199849 2572 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199853 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:52:05.204064 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199857 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199861 2572 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199866 2572 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199869 2572 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199874 2572 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199879 2572 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199884 2572 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199888 2572 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199892 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199896 2572 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199900 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199904 2572 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199908 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199912 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199916 2572 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199921 2572 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199925 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199929 2572 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199933 2572 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:52:05.204667 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199937 2572 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:52:05.205148 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199942 2572 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:52:05.205148 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199946 2572 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:52:05.205148 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199950 2572 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:52:05.205148 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199954 2572 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:52:05.205148 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199958 2572 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:52:05.205148 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199962 2572 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:52:05.205148 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199966 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:52:05.205148 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199971 2572 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:52:05.205148 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199975 2572 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:52:05.205148 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199979 2572 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:52:05.205148 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199983 2572 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:52:05.205148 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199987 2572 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:52:05.205148 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.199991 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:52:05.205148 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.199998 2572 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 23 17:52:05.205148 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.200838 2572 server.go:962] "Client rotation is on, will bootstrap in background"
Apr 23 17:52:05.205582 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.203969 2572 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Apr 23 17:52:05.205582 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.205203 2572 server.go:1019] "Starting client certificate rotation"
Apr 23 17:52:05.205582 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.205303 2572 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 23 17:52:05.205582 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.205338 2572 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 23 17:52:05.234634 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.234604 2572 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 23 17:52:05.240288 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.240263 2572 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 23 17:52:05.255504 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.255478 2572 log.go:25] "Validated CRI v1 runtime API"
Apr 23 17:52:05.261353 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.261332 2572 log.go:25] "Validated CRI v1 image API"
Apr 23 17:52:05.262737 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.262719 2572 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 23 17:52:05.265499 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.265481 2572 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 23 17:52:05.267038 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.267020 2572 fs.go:135] Filesystem UUIDs: map[73bb4e0a-6c90-4501-8423-382d5cf4cde7:/dev/nvme0n1p3 7B77-95E7:/dev/nvme0n1p2 e228e37d-4a4d-49e2-b7f0-42c78baa8b58:/dev/nvme0n1p4]
Apr 23 17:52:05.267082 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.267039 2572 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Apr 23 17:52:05.273714 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.273602 2572 manager.go:217] Machine: {Timestamp:2026-04-23 17:52:05.271506387 +0000 UTC m=+0.466411105 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3114181 MemoryCapacity:33164496896 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2e752764adf18e124e03d0ff1e9586 SystemUUID:ec2e7527-64ad-f18e-124e-03d0ff1e9586 BootID:461aad3f-e561-4282-962b-0adeb3c51a7f Filesystems:[{Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582246400 Type:vfs Inodes:4048400 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632902656 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6103040 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582250496 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:c6:fe:28:a4:09 Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:c6:fe:28:a4:09 Speed:0 Mtu:9001} {Name:ovs-system MacAddress:de:ed:6d:ea:e8:56 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164496896 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Apr 23 17:52:05.273714 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.273709 2572 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Apr 23 17:52:05.273832 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.273796 2572 manager.go:233] Version: {KernelVersion:5.14.0-570.107.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260414-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Apr 23 17:52:05.275912 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.275890 2572 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 23 17:52:05.276060 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.275914 2572 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-133-178.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 23 17:52:05.276151 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.276070 2572 topology_manager.go:138] "Creating topology manager with none policy"
Apr 23 17:52:05.276151 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.276078 2572 container_manager_linux.go:306] "Creating device plugin manager"
Apr 23 17:52:05.276151 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.276094
2572 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 23 17:52:05.277178 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.277168 2572 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 23 17:52:05.278105 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.278095 2572 state_mem.go:36] "Initialized new in-memory state store" Apr 23 17:52:05.278375 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.278365 2572 server.go:1267] "Using root directory" path="/var/lib/kubelet" Apr 23 17:52:05.280993 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.280983 2572 kubelet.go:491] "Attempting to sync node with API server" Apr 23 17:52:05.281039 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.280999 2572 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 23 17:52:05.281039 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.281012 2572 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Apr 23 17:52:05.281039 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.281021 2572 kubelet.go:397] "Adding apiserver pod source" Apr 23 17:52:05.281039 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.281030 2572 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 23 17:52:05.282303 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.282289 2572 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 23 17:52:05.282367 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.282310 2572 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 23 17:52:05.286604 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.286587 2572 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 23 17:52:05.288458 ip-10-0-133-178 
kubenswrapper[2572]: I0423 17:52:05.288442 2572 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 23 17:52:05.290006 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.289991 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 23 17:52:05.290077 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.290011 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 23 17:52:05.290077 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.290021 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 23 17:52:05.290077 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.290030 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 23 17:52:05.290077 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.290038 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 23 17:52:05.290077 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.290047 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Apr 23 17:52:05.290077 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.290057 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Apr 23 17:52:05.290077 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.290066 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 23 17:52:05.290077 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.290076 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 23 17:52:05.290334 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.290085 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 23 17:52:05.290334 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.290111 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 23 
17:52:05.290334 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.290125 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 23 17:52:05.291863 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.291851 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 23 17:52:05.291914 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.291867 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Apr 23 17:52:05.292811 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.292785 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:05.292951 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.292922 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:05.293084 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.293068 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:05.295503 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.295489 2572 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 23 17:52:05.295590 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.295532 2572 server.go:1295] "Started kubelet" Apr 23 17:52:05.295638 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.295613 2572 server.go:180] "Starting to listen" address="0.0.0.0" 
port=10250 Apr 23 17:52:05.296174 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.296129 2572 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 23 17:52:05.296234 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.296190 2572 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 23 17:52:05.296484 ip-10-0-133-178 systemd[1]: Started Kubernetes Kubelet. Apr 23 17:52:05.298349 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.298335 2572 server.go:317] "Adding debug handlers to kubelet server" Apr 23 17:52:05.299549 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.299532 2572 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 23 17:52:05.304757 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.304739 2572 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Apr 23 17:52:05.305229 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.305197 2572 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Apr 23 17:52:05.305800 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.305781 2572 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 23 17:52:05.307766 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.307701 2572 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-133-178.ec2.internal\" not found" Apr 23 17:52:05.308087 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.308072 2572 factory.go:55] Registering systemd factory Apr 23 17:52:05.308148 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.308098 2572 factory.go:223] Registration of the systemd container factory successfully Apr 23 17:52:05.308414 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.308382 2572 volume_manager.go:295] "The 
desired_state_of_world populator starts" Apr 23 17:52:05.308536 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.308527 2572 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 23 17:52:05.309955 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.308507 2572 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 23 17:52:05.309955 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.308633 2572 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-8frxh" Apr 23 17:52:05.309955 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.309640 2572 factory.go:153] Registering CRI-O factory Apr 23 17:52:05.309955 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.309655 2572 factory.go:223] Registration of the crio container factory successfully Apr 23 17:52:05.309955 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.309707 2572 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Apr 23 17:52:05.309955 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.309846 2572 reconstruct.go:97] "Volume reconstruction finished" Apr 23 17:52:05.309955 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.309856 2572 reconciler.go:26] "Reconciler: start to sync state" Apr 23 17:52:05.310365 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.309962 2572 factory.go:103] Registering Raw factory Apr 23 17:52:05.310365 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.310023 2572 manager.go:1196] Started watching for new ooms in manager Apr 23 17:52:05.310900 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.310881 2572 manager.go:319] Starting recovery of all containers Apr 23 17:52:05.311349 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.311319 2572 reflector.go:200] 
"Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:05.311961 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.311934 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Apr 23 17:52:05.312385 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.311243 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd266ed709f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.295501471 +0000 UTC m=+0.490406191,LastTimestamp:2026-04-23 17:52:05.295501471 +0000 UTC m=+0.490406191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.319209 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:05.319033 2572 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/system.slice/systemd-update-utmp-runlevel.service/memory.max": read /sys/fs/cgroup/system.slice/systemd-update-utmp-runlevel.service/memory.max: no such device Apr 23 
17:52:05.320708 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.320695 2572 manager.go:324] Recovery completed Apr 23 17:52:05.324670 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.324657 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:05.327291 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.327275 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:05.327364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.327304 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:05.327364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.327317 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:05.327781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.327766 2572 cpu_manager.go:222] "Starting CPU manager" policy="none" Apr 23 17:52:05.327781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.327779 2572 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Apr 23 17:52:05.327909 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.327796 2572 state_mem.go:36] "Initialized new in-memory state store" Apr 23 17:52:05.329005 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.328916 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d27e15 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 
ip-10-0-133-178.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327289877 +0000 UTC m=+0.522194595,LastTimestamp:2026-04-23 17:52:05.327289877 +0000 UTC m=+0.522194595,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.330356 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.330343 2572 policy_none.go:49] "None policy: Start" Apr 23 17:52:05.330427 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.330360 2572 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 23 17:52:05.330427 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.330371 2572 state_mem.go:35] "Initializing new in-memory state store" Apr 23 17:52:05.340634 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.340566 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d2d217 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327311383 +0000 UTC m=+0.522216104,LastTimestamp:2026-04-23 17:52:05.327311383 +0000 UTC m=+0.522216104,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.353818 ip-10-0-133-178 kubenswrapper[2572]: E0423 
17:52:05.353739 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d2facf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327321807 +0000 UTC m=+0.522226526,LastTimestamp:2026-04-23 17:52:05.327321807 +0000 UTC m=+0.522226526,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.378945 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.378931 2572 manager.go:341] "Starting Device Plugin manager" Apr 23 17:52:05.383076 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.378968 2572 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 23 17:52:05.383076 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.378981 2572 server.go:85] "Starting device plugin registration server" Apr 23 17:52:05.383076 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.379222 2572 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 23 17:52:05.383076 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.379236 2572 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 23 17:52:05.383076 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.379333 2572 plugin_watcher.go:51] "Plugin Watcher Start" 
path="/var/lib/kubelet/plugins_registry" Apr 23 17:52:05.383076 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.379433 2572 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Apr 23 17:52:05.383076 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.379442 2572 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 23 17:52:05.383076 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.379967 2572 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Apr 23 17:52:05.383076 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.379999 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-133-178.ec2.internal\" not found" Apr 23 17:52:05.391225 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.391160 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd26c0a2e87 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.381271175 +0000 UTC m=+0.576175881,LastTimestamp:2026-04-23 17:52:05.381271175 +0000 UTC m=+0.576175881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.412217 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.412167 2572 kubelet_network_linux.go:49] 
"Initialized iptables rules." protocol="IPv4" Apr 23 17:52:05.413414 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.413374 2572 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 23 17:52:05.413414 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.413418 2572 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 23 17:52:05.413583 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.413438 2572 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 23 17:52:05.413583 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.413445 2572 kubelet.go:2451] "Starting kubelet main sync loop" Apr 23 17:52:05.413583 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.413478 2572 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 23 17:52:05.425271 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.425242 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:05.479578 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.479516 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:05.480496 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.480469 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:05.480593 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.480503 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:05.480593 ip-10-0-133-178 
kubenswrapper[2572]: I0423 17:52:05.480517 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:05.480593 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.480541 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.493000 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.492911 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d27e15\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d27e15 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327289877 +0000 UTC m=+0.522194595,LastTimestamp:2026-04-23 17:52:05.480486569 +0000 UTC m=+0.675391287,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.500544 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.500483 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d2d217\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d2d217 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327311383 +0000 UTC m=+0.522216104,LastTimestamp:2026-04-23 17:52:05.480510577 +0000 UTC m=+0.675415295,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.500656 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.500589 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.502735 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.502681 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d2facf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d2facf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327321807 +0000 UTC m=+0.522226526,LastTimestamp:2026-04-23 17:52:05.480520873 +0000 UTC 
m=+0.675425591,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.514211 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.514164 2572 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-133-178.ec2.internal"] Apr 23 17:52:05.514283 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.514252 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:05.514336 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.514303 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Apr 23 17:52:05.515790 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.515776 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:05.515854 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.515803 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:05.515854 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.515816 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:05.517148 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.517134 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:05.517293 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.517279 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.517340 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.517308 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:05.519544 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.519529 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:05.519617 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.519558 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:05.519617 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.519567 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:05.519617 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.519587 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:05.519617 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.519609 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:05.519617 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.519618 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:05.520744 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.520732 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.520786 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.520755 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:05.521955 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.521941 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:05.522044 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.521959 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:05.522044 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.521969 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:05.528821 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.528753 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d27e15\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d27e15 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327289877 +0000 UTC m=+0.522194595,LastTimestamp:2026-04-23 17:52:05.515789019 +0000 UTC m=+0.710693737,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.538423 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.538350 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d2d217\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d2d217 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327311383 +0000 UTC m=+0.522216104,LastTimestamp:2026-04-23 17:52:05.51581049 +0000 UTC m=+0.710715209,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.545480 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.545349 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d2facf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d2facf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-133-178.ec2.internal status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327321807 +0000 UTC m=+0.522226526,LastTimestamp:2026-04-23 17:52:05.515820471 +0000 UTC m=+0.710725189,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.546137 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.546122 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.550358 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.550344 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.553918 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.553859 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d27e15\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d27e15 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327289877 +0000 UTC m=+0.522194595,LastTimestamp:2026-04-23 17:52:05.519544511 +0000 UTC m=+0.714449235,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.560531 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.560473 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d2d217\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d2d217 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327311383 +0000 UTC m=+0.522216104,LastTimestamp:2026-04-23 17:52:05.519562778 +0000 UTC m=+0.714467495,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.568632 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.568569 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d2facf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d2facf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-133-178.ec2.internal status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327321807 +0000 UTC m=+0.522226526,LastTimestamp:2026-04-23 17:52:05.519571251 +0000 UTC m=+0.714475970,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.574946 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.574891 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d27e15\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d27e15 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327289877 +0000 UTC m=+0.522194595,LastTimestamp:2026-04-23 17:52:05.519600839 +0000 UTC m=+0.714505557,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.585872 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.585813 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d2d217\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d2d217 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327311383 +0000 UTC m=+0.522216104,LastTimestamp:2026-04-23 17:52:05.519613814 +0000 UTC m=+0.714518532,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.595091 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.595025 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d2facf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d2facf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327321807 +0000 UTC m=+0.522226526,LastTimestamp:2026-04-23 17:52:05.519622585 +0000 UTC m=+0.714527302,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.604684 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.604615 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d27e15\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d27e15 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327289877 +0000 UTC m=+0.522194595,LastTimestamp:2026-04-23 17:52:05.521952048 +0000 UTC m=+0.716856765,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.611868 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.611846 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/422f53ca0cc951b394e5e5ec59460e85-config\") pod \"kube-apiserver-proxy-ip-10-0-133-178.ec2.internal\" (UID: \"422f53ca0cc951b394e5e5ec59460e85\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.611955 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.611874 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/2e27a1d033408744b4b8c34c52f01b43-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal\" (UID: \"2e27a1d033408744b4b8c34c52f01b43\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.611955 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.611894 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e27a1d033408744b4b8c34c52f01b43-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal\" (UID: \"2e27a1d033408744b4b8c34c52f01b43\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.613256 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.613194 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d2d217\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d2d217 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327311383 +0000 UTC m=+0.522216104,LastTimestamp:2026-04-23 17:52:05.521964373 +0000 UTC m=+0.716869092,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.623670 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.623612 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d2facf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d2facf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327321807 +0000 UTC m=+0.522226526,LastTimestamp:2026-04-23 17:52:05.521973274 +0000 UTC m=+0.716877992,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.700778 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.700747 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:05.701721 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.701703 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:05.701794 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.701734 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:05.701794 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.701763 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:05.701794 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.701788 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.709066 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.709001 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d27e15\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d27e15 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327289877 +0000 UTC m=+0.522194595,LastTimestamp:2026-04-23 17:52:05.701719492 +0000 UTC m=+0.896624209,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.711610 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.711548 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d2d217\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d2d217 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327311383 +0000 UTC m=+0.522216104,LastTimestamp:2026-04-23 17:52:05.701738325 +0000 UTC m=+0.896643043,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.711711 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.711689 2572 
kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.712136 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.712116 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/2e27a1d033408744b4b8c34c52f01b43-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal\" (UID: \"2e27a1d033408744b4b8c34c52f01b43\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.712173 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.712156 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e27a1d033408744b4b8c34c52f01b43-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal\" (UID: \"2e27a1d033408744b4b8c34c52f01b43\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.712226 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.712185 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/422f53ca0cc951b394e5e5ec59460e85-config\") pod \"kube-apiserver-proxy-ip-10-0-133-178.ec2.internal\" (UID: \"422f53ca0cc951b394e5e5ec59460e85\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.712278 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.712222 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/2e27a1d033408744b4b8c34c52f01b43-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal\" (UID: \"2e27a1d033408744b4b8c34c52f01b43\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.712278 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.712233 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e27a1d033408744b4b8c34c52f01b43-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal\" (UID: \"2e27a1d033408744b4b8c34c52f01b43\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.712278 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.712241 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/422f53ca0cc951b394e5e5ec59460e85-config\") pod \"kube-apiserver-proxy-ip-10-0-133-178.ec2.internal\" (UID: \"422f53ca0cc951b394e5e5ec59460e85\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.719058 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.719001 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d2facf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d2facf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327321807 +0000 UTC m=+0.522226526,LastTimestamp:2026-04-23 17:52:05.701767605 +0000 UTC m=+0.896672323,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:05.848967 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.848873 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.852517 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:05.852497 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-133-178.ec2.internal" Apr 23 17:52:05.924320 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:05.924294 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Apr 23 17:52:06.112323 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:06.112249 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:06.113638 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:06.113622 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:06.113728 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:06.113653 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:06.113728 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:06.113668 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:06.113728 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:06.113699 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:06.120354 
ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:06.120278 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d27e15\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d27e15 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327289877 +0000 UTC m=+0.522194595,LastTimestamp:2026-04-23 17:52:06.113639522 +0000 UTC m=+1.308544239,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:06.129506 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:06.129481 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:06.129570 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:06.129448 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"ip-10-0-133-178.ec2.internal.18a90dd268d2d217\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-133-178.ec2.internal.18a90dd268d2d217 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-133-178.ec2.internal,UID:ip-10-0-133-178.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-133-178.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:05.327311383 +0000 UTC m=+0.522216104,LastTimestamp:2026-04-23 17:52:06.113657558 +0000 UTC m=+1.308562276,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:06.293034 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:06.292999 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:06.303124 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:06.303104 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:06.502497 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:52:06.502470 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod422f53ca0cc951b394e5e5ec59460e85.slice/crio-a501455b997744cd4973470354957e5d84b2a1d2b8301762a1f610b5f6575d02 WatchSource:0}: Error finding container a501455b997744cd4973470354957e5d84b2a1d2b8301762a1f610b5f6575d02: Status 404 returned error can't find the container with id a501455b997744cd4973470354957e5d84b2a1d2b8301762a1f610b5f6575d02 Apr 23 17:52:06.502982 ip-10-0-133-178 kubenswrapper[2572]: W0423 
17:52:06.502968 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e27a1d033408744b4b8c34c52f01b43.slice/crio-daef8fa1d3e71f55f2e920b071246855540918b7df362d3fd3f3076e05582df6 WatchSource:0}: Error finding container daef8fa1d3e71f55f2e920b071246855540918b7df362d3fd3f3076e05582df6: Status 404 returned error can't find the container with id daef8fa1d3e71f55f2e920b071246855540918b7df362d3fd3f3076e05582df6
Apr 23 17:52:06.507727 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:06.507709 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 17:52:06.515628 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:06.515560 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-133-178.ec2.internal.18a90dd2af323719 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-133-178.ec2.internal,UID:422f53ca0cc951b394e5e5ec59460e85,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421\",Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:06.507968281 +0000 UTC m=+1.702872986,LastTimestamp:2026-04-23 17:52:06.507968281 +0000 UTC m=+1.702872986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:52:06.524179 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:06.524108 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd2af33204d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\",Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:06.508027981 +0000 UTC m=+1.702932686,LastTimestamp:2026-04-23 17:52:06.508027981 +0000 UTC m=+1.702932686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:52:06.612540 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:06.612508 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 23 17:52:06.702274 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:06.702200 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 23 17:52:06.729689 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:06.729664 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="1.6s"
Apr 23 17:52:06.729821 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:06.729705 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 23 17:52:06.930459 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:06.930421 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:06.931693 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:06.931669 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:06.931809 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:06.931707 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:06.931809 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:06.931722 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:06.931809 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:06.931761 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:52:06.948297 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:06.948259 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:52:07.302744 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:07.302707 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:07.418178 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:07.418114 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" event={"ID":"2e27a1d033408744b4b8c34c52f01b43","Type":"ContainerStarted","Data":"daef8fa1d3e71f55f2e920b071246855540918b7df362d3fd3f3076e05582df6"}
Apr 23 17:52:07.419189 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:07.419159 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-133-178.ec2.internal" event={"ID":"422f53ca0cc951b394e5e5ec59460e85","Type":"ContainerStarted","Data":"a501455b997744cd4973470354957e5d84b2a1d2b8301762a1f610b5f6575d02"}
Apr 23 17:52:08.057629 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:08.057520 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-133-178.ec2.internal.18a90dd30b053aa2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-133-178.ec2.internal,UID:422f53ca0cc951b394e5e5ec59460e85,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421\" in 1.54s (1.54s including waiting). Image size: 488332864 bytes.,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:08.048523938 +0000 UTC m=+3.243428656,LastTimestamp:2026-04-23 17:52:08.048523938 +0000 UTC m=+3.243428656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:52:08.067258 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:08.067167 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd30b10dda6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" in 1.541s (1.541s including waiting). Image size: 468435751 bytes.,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:08.049286566 +0000 UTC m=+3.244191291,LastTimestamp:2026-04-23 17:52:08.049286566 +0000 UTC m=+3.244191291,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:52:08.131283 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:08.131211 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-133-178.ec2.internal.18a90dd30f7c176b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-133-178.ec2.internal,UID:422f53ca0cc951b394e5e5ec59460e85,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Created,Message:Created container: haproxy,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:08.123422571 +0000 UTC m=+3.318327289,LastTimestamp:2026-04-23 17:52:08.123422571 +0000 UTC m=+3.318327289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:52:08.142263 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:08.142170 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{kube-apiserver-proxy-ip-10-0-133-178.ec2.internal.18a90dd30fe08d24 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-proxy-ip-10-0-133-178.ec2.internal,UID:422f53ca0cc951b394e5e5ec59460e85,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{haproxy},},Reason:Started,Message:Started container haproxy,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:08.130006308 +0000 UTC m=+3.324911025,LastTimestamp:2026-04-23 17:52:08.130006308 +0000 UTC m=+3.324911025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:52:08.303250 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:08.303178 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:08.338800 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:08.338755 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="3.2s"
Apr 23 17:52:08.422061 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:08.422030 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-133-178.ec2.internal" event={"ID":"422f53ca0cc951b394e5e5ec59460e85","Type":"ContainerStarted","Data":"44a2eb6ca8f47a21ae4b2ee3f29db61a50caa81b20f589a570fec20aaf5fa8b1"}
Apr 23 17:52:08.422190 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:08.422092 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:08.423336 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:08.423316 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:08.423465 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:08.423344 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:08.423465 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:08.423354 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:08.423550 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:08.423523 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:52:08.549353 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:08.549320 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:08.554541 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:08.554486 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:08.554541 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:08.554518 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:08.554541 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:08.554528 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:08.554703 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:08.554557 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:52:08.572936 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:08.572752 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:52:08.595595 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:08.595523 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd32b02fbf6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:08.585247734 +0000 UTC m=+3.780152452,LastTimestamp:2026-04-23 17:52:08.585247734 +0000 UTC m=+3.780152452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:52:08.602083 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:08.602018 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd32b6ea360 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:08.592302944 +0000 UTC m=+3.787207667,LastTimestamp:2026-04-23 17:52:08.592302944 +0000 UTC m=+3.787207667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:52:08.946622 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:08.946543 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 23 17:52:09.183248 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:09.183220 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 23 17:52:09.302183 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:09.302130 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:09.383791 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:09.383758 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 23 17:52:09.424572 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:09.424542 2572 generic.go:358] "Generic (PLEG): container finished" podID="2e27a1d033408744b4b8c34c52f01b43" containerID="d93ed79bb61c27de06c59d946791a4142b773f26a6a37383a7a3380b1959ebc8" exitCode=0
Apr 23 17:52:09.424719 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:09.424611 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:09.424719 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:09.424625 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" event={"ID":"2e27a1d033408744b4b8c34c52f01b43","Type":"ContainerDied","Data":"d93ed79bb61c27de06c59d946791a4142b773f26a6a37383a7a3380b1959ebc8"}
Apr 23 17:52:09.424719 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:09.424642 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:09.425475 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:09.425455 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:09.425552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:09.425484 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:09.425552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:09.425456 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:09.425552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:09.425500 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:09.425552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:09.425523 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:09.425552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:09.425537 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:09.426679 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:09.426663 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:52:09.426785 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:09.426772 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:52:09.438454 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:09.438372 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd35d48b226 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:09.428677158 +0000 UTC m=+4.623581862,LastTimestamp:2026-04-23 17:52:09.428677158 +0000 UTC m=+4.623581862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:52:09.535897 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:09.535817 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3633914ae openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:09.528317102 +0000 UTC m=+4.723221823,LastTimestamp:2026-04-23 17:52:09.528317102 +0000 UTC m=+4.723221823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:52:09.544158 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:09.544084 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd363ab6b7d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:09.535810429 +0000 UTC m=+4.730723246,LastTimestamp:2026-04-23 17:52:09.535810429 +0000 UTC m=+4.730723246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:52:09.819453 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:09.819364 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 23 17:52:10.302410 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:10.302336 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:10.426990 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:10.426962 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/0.log"
Apr 23 17:52:10.427392 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:10.427305 2572 generic.go:358] "Generic (PLEG): container finished" podID="2e27a1d033408744b4b8c34c52f01b43" containerID="ebb580e91532537d002aabbe92afb0fbd588258871100b2f80fa7438e9eef40c" exitCode=1
Apr 23 17:52:10.427392 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:10.427340 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" event={"ID":"2e27a1d033408744b4b8c34c52f01b43","Type":"ContainerDied","Data":"ebb580e91532537d002aabbe92afb0fbd588258871100b2f80fa7438e9eef40c"}
Apr 23 17:52:10.427392 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:10.427411 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:10.428341 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:10.428328 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:10.428414 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:10.428353 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:10.428414 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:10.428363 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:10.428603 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:10.428591 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:52:10.428645 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:10.428636 2572 scope.go:117] "RemoveContainer" containerID="ebb580e91532537d002aabbe92afb0fbd588258871100b2f80fa7438e9eef40c"
Apr 23 17:52:10.439122 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:10.439046 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd35d48b226\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd35d48b226 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:09.428677158 +0000 UTC m=+4.623581862,LastTimestamp:2026-04-23 17:52:10.431394494 +0000 UTC m=+5.626299219,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:52:10.535543 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:10.535465 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3633914ae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3633914ae openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:09.528317102 +0000 UTC m=+4.723221823,LastTimestamp:2026-04-23 17:52:10.525815858 +0000 UTC m=+5.720720566,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:52:10.543203 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:10.543111 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd363ab6b7d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd363ab6b7d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:09.535810429 +0000 UTC m=+4.730723246,LastTimestamp:2026-04-23 17:52:10.533270248 +0000 UTC m=+5.728174968,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:52:11.302091 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:11.302059 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:11.430500 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:11.430473 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/1.log"
Apr 23 17:52:11.430850 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:11.430797 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/0.log"
Apr 23 17:52:11.431088 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:11.431065 2572 generic.go:358] "Generic (PLEG): container finished" podID="2e27a1d033408744b4b8c34c52f01b43" containerID="3e7c88ef334b0dc070ac573ca57c72e0be7b361f6d310330db136c2542fac221" exitCode=1
Apr 23 17:52:11.431143 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:11.431105 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" event={"ID":"2e27a1d033408744b4b8c34c52f01b43","Type":"ContainerDied","Data":"3e7c88ef334b0dc070ac573ca57c72e0be7b361f6d310330db136c2542fac221"}
Apr 23 17:52:11.431143 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:11.431136 2572 scope.go:117] "RemoveContainer" containerID="ebb580e91532537d002aabbe92afb0fbd588258871100b2f80fa7438e9eef40c"
Apr 23 17:52:11.431211 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:11.431154 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:11.431974 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:11.431957 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:11.432069 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:11.431987 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:11.432069 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:11.431997 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:11.432339 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:11.432258 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:52:11.432339 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:11.432308 2572 scope.go:117] "RemoveContainer" containerID="3e7c88ef334b0dc070ac573ca57c72e0be7b361f6d310330db136c2542fac221"
Apr 23 17:52:11.432481 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:11.432465 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" podUID="2e27a1d033408744b4b8c34c52f01b43"
Apr 23 17:52:11.439312 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:11.439246 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43),Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:11.432424098 +0000 UTC m=+6.627328818,LastTimestamp:2026-04-23 17:52:11.432424098 +0000 UTC m=+6.627328818,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:52:11.547452 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:11.547419 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s"
Apr 23 17:52:11.773824 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:11.773730 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:11.774777 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:11.774755 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:11.774882 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:11.774794 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:11.774882 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:11.774807 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:52:11.774882 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:11.774846 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:52:11.793045 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:11.793017 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:52:12.300852 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:12.300825 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:52:12.434162 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:12.434138 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/1.log"
Apr 23 17:52:12.434601 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:12.434587 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:52:12.435549 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:12.435531 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:52:12.435642 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:12.435564 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:52:12.435642 ip-10-0-133-178
kubenswrapper[2572]: I0423 17:52:12.435577 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:12.435825 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:12.435810 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:12.435886 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:12.435867 2572 scope.go:117] "RemoveContainer" containerID="3e7c88ef334b0dc070ac573ca57c72e0be7b361f6d310330db136c2542fac221" Apr 23 17:52:12.436035 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:12.436018 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" podUID="2e27a1d033408744b4b8c34c52f01b43" Apr 23 17:52:12.444851 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:12.444760 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43),Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:11.432424098 +0000 UTC m=+6.627328818,LastTimestamp:2026-04-23 17:52:12.435987949 +0000 UTC m=+7.630892653,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:13.058048 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:13.058015 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:13.187163 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:13.187133 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:13.303190 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:13.303155 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:14.303747 
ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:14.303716 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:14.566294 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:14.566217 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:14.688359 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:14.688325 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:15.301899 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:15.301866 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:15.380989 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:15.380949 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-133-178.ec2.internal\" not found" Apr 23 17:52:16.302566 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:16.302534 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:17.300729 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:17.300700 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:17.957021 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:17.956987 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:18.193781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:18.193737 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:18.194989 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:18.194966 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:18.195045 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:18.194999 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:18.195045 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:18.195012 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:18.195045 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:18.195040 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:18.211861 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:18.211801 2572 kubelet_node_status.go:116] "Unable to register node with API server, error 
getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:18.300693 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:18.300665 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:19.303209 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:19.303175 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:20.300812 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:20.300781 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:21.302364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:21.302332 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:22.304282 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:22.304248 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:23.302607 ip-10-0-133-178 
kubenswrapper[2572]: I0423 17:52:23.302575 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:23.394685 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:23.394658 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:23.403864 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:23.403842 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:24.301441 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:24.301412 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:24.966717 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:24.966685 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:25.212489 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:25.212447 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 
17:52:25.213476 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:25.213450 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:25.213476 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:25.213482 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:25.213638 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:25.213492 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:25.213638 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:25.213522 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:25.230027 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:25.229972 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:25.303086 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:25.303056 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:25.381696 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:25.381650 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-133-178.ec2.internal\" not found" Apr 23 17:52:25.871281 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:25.871251 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User 
\"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:25.883359 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:25.883331 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:26.302782 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:26.302758 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:26.414168 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:26.414129 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:26.415113 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:26.415094 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:26.415207 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:26.415123 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:26.415207 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:26.415145 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:26.415378 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:26.415365 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:26.415441 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:26.415431 2572 scope.go:117] "RemoveContainer" containerID="3e7c88ef334b0dc070ac573ca57c72e0be7b361f6d310330db136c2542fac221" Apr 23 17:52:26.425891 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:26.425816 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd35d48b226\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd35d48b226 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:09.428677158 +0000 UTC m=+4.623581862,LastTimestamp:2026-04-23 17:52:26.417293558 +0000 UTC m=+21.612198285,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:26.521933 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:26.521848 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3633914ae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the 
namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3633914ae openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:09.528317102 +0000 UTC m=+4.723221823,LastTimestamp:2026-04-23 17:52:26.512156872 +0000 UTC m=+21.707061589,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:26.529100 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:26.529032 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd363ab6b7d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd363ab6b7d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:09.535810429 +0000 UTC m=+4.730723246,LastTimestamp:2026-04-23 17:52:26.520674418 +0000 
UTC m=+21.715579142,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:27.303586 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:27.303557 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:27.459337 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:27.459309 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/2.log" Apr 23 17:52:27.459643 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:27.459629 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/1.log" Apr 23 17:52:27.459930 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:27.459907 2572 generic.go:358] "Generic (PLEG): container finished" podID="2e27a1d033408744b4b8c34c52f01b43" containerID="7022ca688fd98e8a92c18b2e0fd4c3d2df00188ed0cc5c847db896f947de0586" exitCode=1 Apr 23 17:52:27.459970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:27.459943 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" event={"ID":"2e27a1d033408744b4b8c34c52f01b43","Type":"ContainerDied","Data":"7022ca688fd98e8a92c18b2e0fd4c3d2df00188ed0cc5c847db896f947de0586"} Apr 23 17:52:27.460007 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:27.459969 2572 scope.go:117] "RemoveContainer" containerID="3e7c88ef334b0dc070ac573ca57c72e0be7b361f6d310330db136c2542fac221" Apr 23 17:52:27.460103 ip-10-0-133-178 
kubenswrapper[2572]: I0423 17:52:27.460089 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:27.461157 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:27.460970 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:27.461157 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:27.460999 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:27.461157 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:27.461011 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:27.461770 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:27.461736 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:27.461847 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:27.461811 2572 scope.go:117] "RemoveContainer" containerID="7022ca688fd98e8a92c18b2e0fd4c3d2df00188ed0cc5c847db896f947de0586" Apr 23 17:52:27.463491 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:27.462385 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" podUID="2e27a1d033408744b4b8c34c52f01b43" Apr 23 17:52:27.472349 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:27.472273 2572 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43),Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:11.432424098 +0000 UTC m=+6.627328818,LastTimestamp:2026-04-23 17:52:27.462341347 +0000 UTC m=+22.657246065,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:28.302690 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:28.302662 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:28.462415 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:28.462385 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/2.log" Apr 23 17:52:29.300762 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:29.300728 2572 csi_plugin.go:988] Failed to 
contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:30.304213 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:30.304185 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:31.301681 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:31.301650 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:31.976836 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:31.976813 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:32.230095 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:32.230013 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:32.231043 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:32.231024 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:32.231150 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:32.231056 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:32.231150 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:32.231070 
2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:32.231150 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:32.231100 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:32.247924 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:32.247903 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:32.301875 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:32.301854 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:33.303161 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:33.303135 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:34.300092 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:34.300064 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:35.302643 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:35.302609 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" 
cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:35.382612 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:35.382572 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-133-178.ec2.internal\" not found" Apr 23 17:52:36.303198 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:36.303169 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:37.302653 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:37.302623 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:38.302480 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:38.302450 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:38.987771 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:38.987743 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:39.248856 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:39.248780 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:39.249732 ip-10-0-133-178 kubenswrapper[2572]: I0423 
17:52:39.249716 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:39.249791 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:39.249748 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:39.249791 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:39.249764 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:39.249852 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:39.249797 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:39.266001 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:39.265975 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:39.300586 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:39.300563 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:39.586906 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:39.586824 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 17:52:40.300747 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:40.300719 2572 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:40.413935 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:40.413901 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:40.415034 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:40.415016 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:40.415150 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:40.415044 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:40.415150 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:40.415054 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:40.415266 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:40.415252 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:40.415321 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:40.415301 2572 scope.go:117] "RemoveContainer" containerID="7022ca688fd98e8a92c18b2e0fd4c3d2df00188ed0cc5c847db896f947de0586" Apr 23 17:52:40.415468 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:40.415451 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43)\"" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" podUID="2e27a1d033408744b4b8c34c52f01b43" Apr 23 17:52:40.424018 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:40.423943 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43),Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:11.432424098 +0000 UTC m=+6.627328818,LastTimestamp:2026-04-23 17:52:40.415416757 +0000 UTC m=+35.610321476,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:40.549347 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:40.549312 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:52:40.642728 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:40.642659 2572 
reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:52:41.300719 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:41.300687 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:42.302170 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:42.302137 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:43.301838 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:43.301801 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:44.304683 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:44.304653 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:45.301079 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:45.301040 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:45.383723 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:45.383685 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-133-178.ec2.internal\" not found" Apr 23 17:52:45.997694 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:45.997662 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:46.266667 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:46.266578 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:46.267691 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:46.267673 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:46.267842 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:46.267711 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:46.267842 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:46.267727 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:46.267842 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:46.267761 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:46.282959 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:46.282928 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API 
group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:46.301011 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:46.300986 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:47.303841 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:47.303806 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:48.017686 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:48.017655 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:52:48.304597 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:48.304527 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:49.301543 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:49.301509 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:50.302597 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:50.302565 2572 csi_plugin.go:988] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:51.300746 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:51.300720 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:52.303561 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:52.303525 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:53.005036 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:53.005000 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:52:53.283694 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:53.283587 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:53.284690 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:53.284668 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:53.284806 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:53.284707 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:53.284806 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:53.284722 2572 
kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:53.284806 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:53.284753 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:53.301179 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:53.301153 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:53.301179 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:53.301164 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:54.300510 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:54.300479 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:54.413685 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:54.413650 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:54.415199 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:54.415182 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:54.415307 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:54.415216 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 
23 17:52:54.415307 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:54.415230 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:54.415522 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:54.415506 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:54.415592 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:54.415572 2572 scope.go:117] "RemoveContainer" containerID="7022ca688fd98e8a92c18b2e0fd4c3d2df00188ed0cc5c847db896f947de0586" Apr 23 17:52:54.425772 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:54.425685 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd35d48b226\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd35d48b226 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:09.428677158 +0000 UTC m=+4.623581862,LastTimestamp:2026-04-23 17:52:54.41625465 +0000 UTC m=+49.611159374,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:54.515373 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:54.515293 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3633914ae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3633914ae openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:09.528317102 +0000 UTC m=+4.723221823,LastTimestamp:2026-04-23 17:52:54.506300682 +0000 UTC m=+49.701205391,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:54.529451 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:54.529354 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd363ab6b7d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd363ab6b7d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:09.535810429 +0000 UTC m=+4.730723246,LastTimestamp:2026-04-23 17:52:54.520222949 +0000 UTC m=+49.715127666,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:55.301139 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:55.301106 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:55.383827 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:55.383796 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-133-178.ec2.internal\" not found" Apr 23 17:52:55.499911 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:55.499883 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/3.log" Apr 23 17:52:55.500238 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:55.500222 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/2.log" Apr 23 17:52:55.500563 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:55.500545 2572 generic.go:358] "Generic (PLEG): 
container finished" podID="2e27a1d033408744b4b8c34c52f01b43" containerID="8c0cd6657c53c5be5328d2a6bc9b268ef9971a86b45482d16fee20dfce8e8faa" exitCode=1 Apr 23 17:52:55.500618 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:55.500578 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" event={"ID":"2e27a1d033408744b4b8c34c52f01b43","Type":"ContainerDied","Data":"8c0cd6657c53c5be5328d2a6bc9b268ef9971a86b45482d16fee20dfce8e8faa"} Apr 23 17:52:55.500618 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:55.500603 2572 scope.go:117] "RemoveContainer" containerID="7022ca688fd98e8a92c18b2e0fd4c3d2df00188ed0cc5c847db896f947de0586" Apr 23 17:52:55.500717 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:55.500704 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:52:55.503139 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:55.503124 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:52:55.503217 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:55.503152 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:52:55.503217 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:55.503162 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:52:55.503375 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:55.503363 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal" Apr 23 17:52:55.503438 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:55.503428 2572 scope.go:117] "RemoveContainer" 
containerID="8c0cd6657c53c5be5328d2a6bc9b268ef9971a86b45482d16fee20dfce8e8faa" Apr 23 17:52:55.503564 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:55.503549 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" podUID="2e27a1d033408744b4b8c34c52f01b43" Apr 23 17:52:55.514090 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:52:55.514021 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43),Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:11.432424098 +0000 UTC m=+6.627328818,LastTimestamp:2026-04-23 17:52:55.503522518 +0000 UTC m=+50.698427241,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:52:56.300272 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:56.300247 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:56.502949 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:56.502923 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/3.log" Apr 23 17:52:57.302726 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:57.302698 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:58.303346 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:58.303320 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:52:59.302335 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:52:59.302307 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:00.013633 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:00.013602 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io 
\"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:00.301705 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:00.301643 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:00.302495 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:00.302478 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:00.302570 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:00.302508 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:00.302570 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:00.302523 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:00.302570 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:00.302551 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal" Apr 23 17:53:00.302986 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:00.302970 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:00.318015 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:00.317987 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal" Apr 23 17:53:01.301729 ip-10-0-133-178 kubenswrapper[2572]: I0423 
17:53:01.301700 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:02.302299 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:02.302271 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:03.301997 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:03.301966 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:04.302619 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:04.302586 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:05.302518 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:05.302489 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:05.384529 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:05.384489 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-133-178.ec2.internal\" not found" Apr 23 17:53:06.303417 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:06.303383 2572 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:07.023148 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:07.023112 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:53:07.303529 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:07.303453 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:07.318911 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:07.318886 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:07.322172 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:07.322150 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:07.322284 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:07.322186 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:07.322284 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:07.322200 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:07.322284 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:07.322234 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal"
Apr
23 17:53:07.343571 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:07.343545 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:53:08.301367 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:08.301331 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:08.414306 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:08.414265 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:08.415334 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:08.415314 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:08.415383 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:08.415347 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:08.415383 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:08.415357 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:08.415590 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:08.415578 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:53:08.415635 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:08.415626 2572 scope.go:117] "RemoveContainer"
containerID="8c0cd6657c53c5be5328d2a6bc9b268ef9971a86b45482d16fee20dfce8e8faa"
Apr 23 17:53:08.415758 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:08.415744 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" podUID="2e27a1d033408744b4b8c34c52f01b43"
Apr 23 17:53:08.426384 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:08.426311 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43),Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:11.432424098 +0000 UTC m=+6.627328818,LastTimestamp:2026-04-23 17:53:08.415717328 +0000 UTC m=+63.610622049,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:53:09.301051 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:09.301020 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:10.302060 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:10.302026 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:11.301544 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:11.301513 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:12.304271 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:12.304238 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:12.874877 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:12.874838 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 23 17:53:13.303893 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:13.303868
2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:13.608571 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:13.608494 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 23 17:53:14.032689 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:14.032659 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:53:14.300857 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:14.300795 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:14.344487 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:14.344453 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:14.345499 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:14.345481 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:14.345608 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:14.345515 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23
17:53:14.345608 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:14.345530 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:14.345608 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:14.345563 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:53:14.360027 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:14.360000 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:53:15.301877 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:15.301850 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:15.384681 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:15.384614 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-133-178.ec2.internal\" not found"
Apr 23 17:53:16.302111 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:16.302083 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:17.302674 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:17.302644 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API
group "storage.k8s.io" at the cluster scope
Apr 23 17:53:18.301728 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:18.301693 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:19.302819 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:19.302789 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:19.414172 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:19.414129 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:19.414996 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:19.414981 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:19.415092 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:19.415011 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:19.415092 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:19.415024 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:19.415304 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:19.415289 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:53:19.415365 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:19.415354 2572 scope.go:117] "RemoveContainer"
containerID="8c0cd6657c53c5be5328d2a6bc9b268ef9971a86b45482d16fee20dfce8e8faa"
Apr 23 17:53:19.415525 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:19.415508 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" podUID="2e27a1d033408744b4b8c34c52f01b43"
Apr 23 17:53:19.421772 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:19.421696 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43),Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:11.432424098 +0000 UTC m=+6.627328818,LastTimestamp:2026-04-23 17:53:19.415473841 +0000 UTC m=+74.610378559,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:53:20.302683 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:20.302650 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:20.812599 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:20.812565 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 23 17:53:21.041541 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:21.041505 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:53:21.303935 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:21.303905 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:21.360753 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:21.360716 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:21.361618 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:21.361600 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23
17:53:21.361728 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:21.361628 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:21.361728 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:21.361637 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:21.361728 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:21.361662 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:53:21.376385 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:21.376363 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:53:22.304109 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:22.304076 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:22.554846 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:22.554765 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 23 17:53:23.302373 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:23.302341 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:24.302202 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:24.302173 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:25.301809 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:25.301778 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:25.385771 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:25.385737 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-133-178.ec2.internal\" not found"
Apr 23 17:53:26.301621 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:26.301591 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:27.303883 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:27.303855 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:28.049300 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:28.049264 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get
resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:53:28.302957 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:28.302883 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:28.376648 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:28.376618 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:28.377630 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:28.377611 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:28.377750 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:28.377646 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:28.377750 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:28.377660 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:28.377750 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:28.377695 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:53:28.393538 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:28.393510 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:53:29.301105 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:29.301075 2572 csi_plugin.go:988] Failed to contact API server when waiting for
CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:30.302280 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:30.302254 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:31.300588 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:31.300563 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:32.302912 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:32.302884 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:33.300353 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:33.300325 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:34.302660 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:34.302627 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:34.413929 ip-10-0-133-178 kubenswrapper[2572]: I0423
17:53:34.413891 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:34.414872 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:34.414853 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:34.414971 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:34.414885 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:34.414971 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:34.414899 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:34.415186 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:34.415171 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:53:34.415232 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:34.415223 2572 scope.go:117] "RemoveContainer" containerID="8c0cd6657c53c5be5328d2a6bc9b268ef9971a86b45482d16fee20dfce8e8faa"
Apr 23 17:53:34.415365 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:34.415349 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" podUID="2e27a1d033408744b4b8c34c52f01b43"
Apr 23 17:53:34.423044 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:34.422970 2572 event.go:359] "Server rejected event (will not retry!)" err="events
\"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43),Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:11.432424098 +0000 UTC m=+6.627328818,LastTimestamp:2026-04-23 17:53:34.415316683 +0000 UTC m=+89.610221401,Count:8,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}"
Apr 23 17:53:35.058684 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:35.058651 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Apr 23 17:53:35.302391 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:35.302359 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:35.386163
ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:35.386093 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-133-178.ec2.internal\" not found"
Apr 23 17:53:35.394085 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:35.394057 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:35.394926 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:35.394911 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:53:35.394990 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:35.394943 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:35.394990 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:35.394955 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:35.394990 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:35.394980 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:53:35.411049 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:35.411018 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:53:35.414226 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:35.414211 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:53:35.414996 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:35.414979 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal"
event="NodeHasSufficientMemory"
Apr 23 17:53:35.415094 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:35.415009 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:53:35.415094 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:35.415022 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:53:35.415256 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:35.415242 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:53:36.302555 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:36.302529 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:37.302367 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:37.302342 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:38.303014 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:38.302988 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 23 17:53:39.301395 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:39.301362 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io
"ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:40.303107 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:40.303077 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:41.300930 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:41.300889 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:42.068337 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:42.068302 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:42.301861 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:42.301828 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:42.412040 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:42.411977 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:42.412926 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:42.412901 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:42.413031 
ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:42.412945 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:42.413031 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:42.412960 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:42.413031 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:42.412995 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal" Apr 23 17:53:42.428500 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:42.428476 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal" Apr 23 17:53:43.302356 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:43.302323 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:44.300396 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:44.300370 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:45.304462 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:45.304436 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster 
scope Apr 23 17:53:45.386261 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:45.386234 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-133-178.ec2.internal\" not found" Apr 23 17:53:46.302840 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:46.302803 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:47.304664 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:47.304637 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:48.301186 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:48.301154 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:49.080506 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:49.080470 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:49.301387 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.301354 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the 
cluster scope Apr 23 17:53:49.413933 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.413858 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:49.414875 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.414857 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:49.414978 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.414891 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:49.414978 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.414904 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:49.415180 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:49.415165 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal" Apr 23 17:53:49.415241 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.415227 2572 scope.go:117] "RemoveContainer" containerID="8c0cd6657c53c5be5328d2a6bc9b268ef9971a86b45482d16fee20dfce8e8faa" Apr 23 17:53:49.426160 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:49.426073 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd35d48b226\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd35d48b226 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81\" already present on machine,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:09.428677158 +0000 UTC m=+4.623581862,LastTimestamp:2026-04-23 17:53:49.415916233 +0000 UTC m=+104.610820957,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:53:49.429163 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.429145 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:49.429965 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.429948 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:49.430049 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.429977 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:49.430049 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.429987 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:49.430049 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.430010 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal" Apr 23 17:53:49.447060 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:49.447040 2572 kubelet_node_status.go:116] "Unable to register node with API 
server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal" Apr 23 17:53:49.514809 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:49.514707 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3633914ae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3633914ae openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:09.528317102 +0000 UTC m=+4.723221823,LastTimestamp:2026-04-23 17:53:49.506699858 +0000 UTC m=+104.701604589,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:53:49.531665 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:49.531587 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd363ab6b7d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd363ab6b7d openshift-machine-config-operator 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:09.535810429 +0000 UTC m=+4.730723246,LastTimestamp:2026-04-23 17:53:49.519721159 +0000 UTC m=+104.714625885,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:53:49.574226 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.574203 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 17:53:49.574572 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.574556 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/3.log" Apr 23 17:53:49.574835 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.574815 2572 generic.go:358] "Generic (PLEG): container finished" podID="2e27a1d033408744b4b8c34c52f01b43" containerID="be862fbb1a27d116f989acff7cdaaa399cb017863e17162d45233d74a9f7b96c" exitCode=1 Apr 23 17:53:49.574879 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.574849 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" event={"ID":"2e27a1d033408744b4b8c34c52f01b43","Type":"ContainerDied","Data":"be862fbb1a27d116f989acff7cdaaa399cb017863e17162d45233d74a9f7b96c"} Apr 23 17:53:49.574912 
ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.574879 2572 scope.go:117] "RemoveContainer" containerID="8c0cd6657c53c5be5328d2a6bc9b268ef9971a86b45482d16fee20dfce8e8faa" Apr 23 17:53:49.574982 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.574971 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:49.575787 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.575679 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:49.575787 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.575712 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:49.575787 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.575722 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:49.575932 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:49.575924 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal" Apr 23 17:53:49.575976 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:49.575965 2572 scope.go:117] "RemoveContainer" containerID="be862fbb1a27d116f989acff7cdaaa399cb017863e17162d45233d74a9f7b96c" Apr 23 17:53:49.576110 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:49.576095 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" 
podUID="2e27a1d033408744b4b8c34c52f01b43" Apr 23 17:53:49.583550 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:49.583482 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43),Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:11.432424098 +0000 UTC m=+6.627328818,LastTimestamp:2026-04-23 17:53:49.576066962 +0000 UTC m=+104.770971691,Count:9,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:53:50.304118 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:50.304093 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:50.577833 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:50.577762 2572 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 17:53:51.301097 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:51.301071 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:52.303647 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:52.303622 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:53.307907 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:53.307883 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:54.302073 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:54.302048 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:55.301341 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:55.301307 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:55.387102 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:55.387064 2572 
eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-133-178.ec2.internal\" not found" Apr 23 17:53:56.102303 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:56.102273 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:53:56.313257 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:56.313234 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:56.447523 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:56.447469 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:53:56.448389 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:56.448369 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:53:56.448495 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:56.448423 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:53:56.448495 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:56.448440 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:53:56.448495 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:56.448473 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal" Apr 23 17:53:56.468266 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:53:56.468241 2572 
kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal" Apr 23 17:53:57.303901 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:57.303872 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:58.301795 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:58.301762 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:53:59.303423 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:53:59.303374 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:00.303112 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:00.303083 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:01.302643 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:01.302612 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Apr 23 17:54:02.302167 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:02.302136 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:03.113277 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:03.113246 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 23 17:54:03.302364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:03.302331 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:03.414686 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:03.414604 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:54:03.415660 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:03.415641 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:54:03.415758 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:03.415677 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:54:03.415758 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:03.415691 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:54:03.415947 ip-10-0-133-178 kubenswrapper[2572]: 
E0423 17:54:03.415932 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal" Apr 23 17:54:03.416008 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:03.415994 2572 scope.go:117] "RemoveContainer" containerID="be862fbb1a27d116f989acff7cdaaa399cb017863e17162d45233d74a9f7b96c" Apr 23 17:54:03.416155 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:03.416137 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" podUID="2e27a1d033408744b4b8c34c52f01b43" Apr 23 17:54:03.425911 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:03.425826 2572 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal.18a90dd3d4b772a2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal,UID:2e27a1d033408744b4b8c34c52f01b43,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod 
kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43),Source:EventSource{Component:kubelet,Host:ip-10-0-133-178.ec2.internal,},FirstTimestamp:2026-04-23 17:52:11.432424098 +0000 UTC m=+6.627328818,LastTimestamp:2026-04-23 17:54:03.416105293 +0000 UTC m=+118.611010012,Count:10,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-133-178.ec2.internal,}" Apr 23 17:54:03.469146 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:03.469120 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:54:03.470146 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:03.470129 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:54:03.470234 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:03.470161 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:54:03.470234 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:03.470174 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:54:03.470234 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:03.470210 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal" Apr 23 17:54:03.488293 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:03.488269 2572 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-133-178.ec2.internal" Apr 23 17:54:04.302017 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:04.301983 2572 csi_plugin.go:988] 
Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:05.304632 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:05.304600 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:05.387413 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:05.387383 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-133-178.ec2.internal\" not found" Apr 23 17:54:06.305682 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:06.305656 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:06.462427 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:06.462384 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-133-178.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:54:07.307766 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:07.307740 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-133-178.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:54:08.135522 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:08.135489 
2572 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-8frxh"
Apr 23 17:54:08.204929 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:08.204892 2572 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Apr 23 17:54:08.321818 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:08.321790 2572 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-133-178.ec2.internal" not found
Apr 23 17:54:08.346909 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:08.346888 2572 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-133-178.ec2.internal" not found
Apr 23 17:54:08.417602 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:08.417538 2572 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-133-178.ec2.internal" not found
Apr 23 17:54:08.698574 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:08.698501 2572 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-133-178.ec2.internal" not found
Apr 23 17:54:08.698574 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:08.698530 2572 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-133-178.ec2.internal" not found
Apr 23 17:54:08.737726 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:08.737705 2572 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-133-178.ec2.internal" not found
Apr 23 17:54:08.756281 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:08.756257 2572 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-133-178.ec2.internal" not found
Apr 23 17:54:08.813175 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:08.813153 2572 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-133-178.ec2.internal" not found
Apr 23 17:54:09.092190 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:09.092166 2572 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-133-178.ec2.internal" not found
Apr 23 17:54:09.092190 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:09.092189 2572 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-133-178.ec2.internal" not found
Apr 23 17:54:09.136750 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:09.136714 2572 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-22 17:49:08 +0000 UTC" deadline="2027-11-25 12:54:25.297080622 +0000 UTC"
Apr 23 17:54:09.136750 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:09.136743 2572 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="13939h0m16.160340231s"
Apr 23 17:54:09.339774 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:09.339747 2572 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-133-178.ec2.internal" not found
Apr 23 17:54:09.361628 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:09.361576 2572 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-133-178.ec2.internal" not found
Apr 23 17:54:09.422805 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:09.422783 2572 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-133-178.ec2.internal" not found
Apr 23 17:54:09.666302 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:09.666239 2572 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:54:09.702393 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:09.702373 2572 nodeinfomanager.go:417] Failed to publish CSINode: nodes "ip-10-0-133-178.ec2.internal" not found
Apr 23 17:54:09.702393 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:09.702394 2572 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-133-178.ec2.internal" not found
Apr 23 17:54:10.120082 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:10.120053 2572 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:54:10.489376 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:10.489346 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:54:10.491569 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:10.491554 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:54:10.491644 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:10.491585 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:54:10.491644 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:10.491595 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:54:10.491644 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:10.491622 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:54:10.501870 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:10.501856 2572 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:54:10.501922 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:10.501879 2572 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-133-178.ec2.internal\": node \"ip-10-0-133-178.ec2.internal\" not found"
Apr 23 17:54:10.524936 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:10.524900 2572 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-133-178.ec2.internal\" not found"
Apr 23 17:54:10.625307 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:10.625275 2572 kubelet_node_status.go:509] "Node not becoming ready in time after startup"
Apr 23 17:54:11.378803 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:11.378772 2572 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Apr 23 17:54:11.387529 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:11.387498 2572 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 23 17:54:11.424425 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:11.424387 2572 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-g2kvw"
Apr 23 17:54:11.430532 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:11.430515 2572 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-g2kvw"
Apr 23 17:54:12.431561 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:12.431510 2572 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-22 17:49:11 +0000 UTC" deadline="2027-10-11 04:40:59.619541098 +0000 UTC"
Apr 23 17:54:12.431561 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:12.431550 2572 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="12850h46m47.187993685s"
Apr 23 17:54:13.282994 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:13.282963 2572 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:54:13.431677 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:13.431622 2572 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-22 17:49:11 +0000 UTC" deadline="2027-11-14 05:18:50.051832218 +0000 UTC"
Apr 23 17:54:13.431677 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:13.431675 2572 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="13667h24m36.620159842s"
Apr 23 17:54:14.523415 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:14.523368 2572 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:54:15.388162 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:15.388114 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-133-178.ec2.internal\" not found"
Apr 23 17:54:15.403439 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:15.403388 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:16.413877 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:16.413839 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:54:16.414770 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:16.414757 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:54:16.414848 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:16.414783 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:54:16.414848 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:16.414793 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:54:16.415033 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:16.415021 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:54:16.415078 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:16.415069 2572 scope.go:117] "RemoveContainer" containerID="be862fbb1a27d116f989acff7cdaaa399cb017863e17162d45233d74a9f7b96c"
Apr 23 17:54:16.415198 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:16.415185 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" podUID="2e27a1d033408744b4b8c34c52f01b43"
Apr 23 17:54:20.403949 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:20.403918 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:20.836778 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:20.836738 2572 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-133-178.ec2.internal\": node \"ip-10-0-133-178.ec2.internal\" not found"
Apr 23 17:54:25.388276 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:25.388223 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-133-178.ec2.internal\" not found"
Apr 23 17:54:25.404544 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:25.404511 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:30.405713 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:30.405678 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:31.037416 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:31.037361 2572 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-133-178.ec2.internal\": node \"ip-10-0-133-178.ec2.internal\" not found"
Apr 23 17:54:31.414520 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:31.414418 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 23 17:54:31.415352 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:31.415335 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientMemory"
Apr 23 17:54:31.415471 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:31.415365 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasNoDiskPressure"
Apr 23 17:54:31.415471 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:31.415376 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeHasSufficientPID"
Apr 23 17:54:31.415618 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:31.415602 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-133-178.ec2.internal\" not found" node="ip-10-0-133-178.ec2.internal"
Apr 23 17:54:31.415679 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:31.415654 2572 scope.go:117] "RemoveContainer" containerID="be862fbb1a27d116f989acff7cdaaa399cb017863e17162d45233d74a9f7b96c"
Apr 23 17:54:31.415785 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:31.415770 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" podUID="2e27a1d033408744b4b8c34c52f01b43"
Apr 23 17:54:35.388452 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:35.388384 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-133-178.ec2.internal\" not found"
Apr 23 17:54:35.406633 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:35.406596 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:40.407702 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:40.407669 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:41.111095 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:41.111056 2572 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-133-178.ec2.internal\": node \"ip-10-0-133-178.ec2.internal\" not found"
Apr 23 17:54:42.351936 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.351907 2572 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:54:42.374326 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.374310 2572 apiserver.go:52] "Watching apiserver"
Apr 23 17:54:42.380086 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.380069 2572 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Apr 23 17:54:42.380446 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.380426 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9","openshift-dns/node-resolver-hxdgk","openshift-network-diagnostics/network-check-target-9wq98","openshift-network-operator/iptables-alerter-4wmwg","kube-system/konnectivity-agent-cwppv","openshift-cluster-node-tuning-operator/tuned-9455k","openshift-image-registry/node-ca-h9g78","openshift-multus/multus-6brjb","openshift-multus/multus-additional-cni-plugins-f9ndr","openshift-multus/network-metrics-daemon-lm6wc","openshift-ovn-kubernetes/ovnkube-node-v9pcc","kube-system/global-pull-secret-syncer-n95c8"]
Apr 23 17:54:42.385195 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.385180 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9"
Apr 23 17:54:42.387154 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.387134 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\""
Apr 23 17:54:42.387154 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.387145 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-2224b\""
Apr 23 17:54:42.387334 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.387159 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\""
Apr 23 17:54:42.387334 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.387201 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\""
Apr 23 17:54:42.387462 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.387353 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hxdgk"
Apr 23 17:54:42.389049 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.389031 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Apr 23 17:54:42.389149 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.389033 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Apr 23 17:54:42.389342 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.389328 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-vlpfs\""
Apr 23 17:54:42.389484 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.389470 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:54:42.389561 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:42.389544 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f"
Apr 23 17:54:42.392003 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.391974 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-cwppv"
Apr 23 17:54:42.394350 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.394333 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\""
Apr 23 17:54:42.394451 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.394381 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\""
Apr 23 17:54:42.394451 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.394388 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-wkbd6\""
Apr 23 17:54:42.394746 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.394732 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4wmwg"
Apr 23 17:54:42.394835 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.394820 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.396375 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.396357 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-rkhpr\""
Apr 23 17:54:42.396483 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.396357 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Apr 23 17:54:42.396605 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.396593 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Apr 23 17:54:42.396663 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.396620 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\""
Apr 23 17:54:42.396834 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.396820 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Apr 23 17:54:42.396997 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.396982 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-h9g78"
Apr 23 17:54:42.397066 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.396988 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\""
Apr 23 17:54:42.397382 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.397364 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-vs95g\""
Apr 23 17:54:42.398591 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.398569 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Apr 23 17:54:42.398591 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.398589 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Apr 23 17:54:42.398796 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.398780 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Apr 23 17:54:42.398888 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.398872 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-4x9xz\""
Apr 23 17:54:42.399115 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.399099 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.400844 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.400825 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Apr 23 17:54:42.400844 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.400844 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Apr 23 17:54:42.401022 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.400920 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Apr 23 17:54:42.401022 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.400931 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-97zm5\""
Apr 23 17:54:42.401022 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.400924 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Apr 23 17:54:42.401334 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.401318 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f9ndr"
Apr 23 17:54:42.403254 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.403238 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Apr 23 17:54:42.403470 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.403456 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Apr 23 17:54:42.403554 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.403501 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-72dm2\""
Apr 23 17:54:42.403863 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.403846 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:54:42.403951 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:42.403931 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45"
Apr 23 17:54:42.406075 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.406059 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.407660 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.407645 2572 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal"
Apr 23 17:54:42.408057 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.408039 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Apr 23 17:54:42.408164 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.408146 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Apr 23 17:54:42.408234 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.408166 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:54:42.408234 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:42.408214 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f"
Apr 23 17:54:42.408234 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.408113 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Apr 23 17:54:42.408380 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.408168 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Apr 23 17:54:42.408380 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.408262 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Apr 23 17:54:42.408380 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.408308 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Apr 23 17:54:42.408380 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.408370 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-9wk7z\""
Apr 23 17:54:42.410086 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.410070 2572 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 23 17:54:42.414232 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.414218 2572 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal"
Apr 23 17:54:42.418080 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.418065 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal"]
Apr 23 17:54:42.418308 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.418294 2572 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 23 17:54:42.418383 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.418370 2572 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-133-178.ec2.internal"
Apr 23 17:54:42.423768 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.423754 2572 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 23 17:54:42.423840 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:42.423787 2572 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal"
Apr 23 17:54:42.423840 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.423828 2572 scope.go:117] "RemoveContainer" containerID="be862fbb1a27d116f989acff7cdaaa399cb017863e17162d45233d74a9f7b96c"
Apr 23 17:54:42.423969 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:42.423954 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" podUID="2e27a1d033408744b4b8c34c52f01b43"
Apr 23 17:54:42.429295 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.429279 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/kube-apiserver-proxy-ip-10-0-133-178.ec2.internal"]
Apr 23 17:54:42.429509 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.429497 2572 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 23 17:54:42.507148 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507121 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-sys\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.507148 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507148 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-lib-modules\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.507320 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507165 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-multus-cni-dir\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.507320 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507179 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-run-multus-certs\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.507320 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507194 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/91e65909-6fc5-43ad-9403-4e762e15651f-kubelet-config\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:54:42.507320 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507209 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-socket-dir\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9"
Apr 23 17:54:42.507320 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507245 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8l9t\" (UniqueName: \"kubernetes.io/projected/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-kube-api-access-c8l9t\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9"
Apr 23 17:54:42.507320 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507287 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-kubernetes\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.507320 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507314 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-systemd\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.507577 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507329 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-hostroot\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.507577 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507369 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-ovnkube-config\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.507577 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507388 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ae6f204b-0425-4e4c-8749-41bce4ec27bd-hosts-file\") pod \"node-resolver-hxdgk\" (UID: \"ae6f204b-0425-4e4c-8749-41bce4ec27bd\") " pod="openshift-dns/node-resolver-hxdgk"
Apr 23 17:54:42.507577 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507432 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ae6f204b-0425-4e4c-8749-41bce4ec27bd-tmp-dir\") pod \"node-resolver-hxdgk\" (UID: \"ae6f204b-0425-4e4c-8749-41bce4ec27bd\") " pod="openshift-dns/node-resolver-hxdgk"
Apr 23 17:54:42.507577 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507462 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/095aaf33-9f06-4dd6-ab66-144f189b570f-host-slash\") pod \"iptables-alerter-4wmwg\" (UID: \"095aaf33-9f06-4dd6-ab66-144f189b570f\") " pod="openshift-network-operator/iptables-alerter-4wmwg"
Apr 23 17:54:42.507577 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507486 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/012f7036-9d2e-45a6-985c-701982b85f46-system-cni-dir\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr"
Apr 23 17:54:42.507577 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507504 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4rrl\" (UniqueName: \"kubernetes.io/projected/cc1881ec-f1a3-4551-ac37-e01f270956dc-kube-api-access-l4rrl\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.507577 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507519 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-run-netns\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.507577 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507533 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs\") pod \"network-metrics-daemon-lm6wc\" (UID: \"f7526a98-a284-45c2-aeb2-cce4ddcd8f45\") " pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:54:42.507577 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507555 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName:
\"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-run-systemd\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.507881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507581 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-run\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.507881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507604 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd-host\") pod \"node-ca-h9g78\" (UID: \"c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd\") " pod="openshift-image-registry/node-ca-h9g78" Apr 23 17:54:42.507881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507618 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/30e5a914-97b2-4c21-985a-db4f9913ea08-cni-binary-copy\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.507881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507633 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-var-lib-openvswitch\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.507881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507656 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-etc-openvswitch\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.507881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507671 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.507881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507685 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrfsk\" (UniqueName: \"kubernetes.io/projected/012f7036-9d2e-45a6-985c-701982b85f46-kube-api-access-mrfsk\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.507881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507704 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-sysconfig\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.507881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507726 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-slash\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" 
Apr 23 17:54:42.507881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507745 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-sysctl-conf\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.507881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507760 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-var-lib-cni-bin\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.507881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507775 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/30e5a914-97b2-4c21-985a-db4f9913ea08-multus-daemon-config\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.507881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507793 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-etc-kubernetes\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.507881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507818 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/26df88a3-a37a-4023-9f9f-cce91875523b-agent-certs\") pod \"konnectivity-agent-cwppv\" (UID: 
\"26df88a3-a37a-4023-9f9f-cce91875523b\") " pod="kube-system/konnectivity-agent-cwppv" Apr 23 17:54:42.507881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507842 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/012f7036-9d2e-45a6-985c-701982b85f46-cnibin\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.507881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507860 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-tuned\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.508364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507892 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd-serviceca\") pod \"node-ca-h9g78\" (UID: \"c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd\") " pod="openshift-image-registry/node-ca-h9g78" Apr 23 17:54:42.508364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507911 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-kubelet\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.508364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507925 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m7d5\" (UniqueName: 
\"kubernetes.io/projected/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-kube-api-access-8m7d5\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.508364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507938 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/91e65909-6fc5-43ad-9403-4e762e15651f-dbus\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:54:42.508364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507956 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl8gx\" (UniqueName: \"kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx\") pod \"network-check-target-9wq98\" (UID: \"4b5c0501-ab5e-4cac-9c9f-f306624ec47f\") " pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:54:42.508364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507977 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvvwz\" (UniqueName: \"kubernetes.io/projected/095aaf33-9f06-4dd6-ab66-144f189b570f-kube-api-access-kvvwz\") pod \"iptables-alerter-4wmwg\" (UID: \"095aaf33-9f06-4dd6-ab66-144f189b570f\") " pod="openshift-network-operator/iptables-alerter-4wmwg" Apr 23 17:54:42.508364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.507992 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/26df88a3-a37a-4023-9f9f-cce91875523b-konnectivity-ca\") pod \"konnectivity-agent-cwppv\" (UID: \"26df88a3-a37a-4023-9f9f-cce91875523b\") " pod="kube-system/konnectivity-agent-cwppv" Apr 23 17:54:42.508364 ip-10-0-133-178 
kubenswrapper[2572]: I0423 17:54:42.508007 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc1881ec-f1a3-4551-ac37-e01f270956dc-tmp\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.508364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508033 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zdg2\" (UniqueName: \"kubernetes.io/projected/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-kube-api-access-4zdg2\") pod \"network-metrics-daemon-lm6wc\" (UID: \"f7526a98-a284-45c2-aeb2-cce4ddcd8f45\") " pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:54:42.508364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508051 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-node-log\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.508364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508064 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-cni-bin\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.508364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508077 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-etc-selinux\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: 
\"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9" Apr 23 17:54:42.508364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508097 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-env-overrides\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.508364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508118 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-host\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.508364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508132 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-systemd-units\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.508364 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508147 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.508893 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508172 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-registration-dir\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9" Apr 23 17:54:42.508893 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508206 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctfjx\" (UniqueName: \"kubernetes.io/projected/ae6f204b-0425-4e4c-8749-41bce4ec27bd-kube-api-access-ctfjx\") pod \"node-resolver-hxdgk\" (UID: \"ae6f204b-0425-4e4c-8749-41bce4ec27bd\") " pod="openshift-dns/node-resolver-hxdgk" Apr 23 17:54:42.508893 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508225 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-var-lib-kubelet\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.508893 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508241 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-var-lib-cni-multus\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.508893 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508262 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-run-openvswitch\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.508893 
ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508283 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-log-socket\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.508893 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508297 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-multus-socket-dir-parent\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.508893 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508329 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-run-ovn\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.508893 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508351 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/012f7036-9d2e-45a6-985c-701982b85f46-os-release\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.508893 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508371 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/012f7036-9d2e-45a6-985c-701982b85f46-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.508893 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508386 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/012f7036-9d2e-45a6-985c-701982b85f46-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.508893 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508417 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/012f7036-9d2e-45a6-985c-701982b85f46-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.508893 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508436 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-cnibin\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.508893 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508455 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-os-release\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.508893 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508470 2572 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:54:42.508893 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508493 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/012f7036-9d2e-45a6-985c-701982b85f46-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.509331 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508515 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-modprobe-d\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.509331 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508532 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-system-cni-dir\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.509331 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508553 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-var-lib-kubelet\") pod \"multus-6brjb\" (UID: 
\"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.509331 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508574 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-ovn-node-metrics-cert\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.509331 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508598 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/095aaf33-9f06-4dd6-ab66-144f189b570f-iptables-alerter-script\") pod \"iptables-alerter-4wmwg\" (UID: \"095aaf33-9f06-4dd6-ab66-144f189b570f\") " pod="openshift-network-operator/iptables-alerter-4wmwg" Apr 23 17:54:42.509331 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508623 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55qq5\" (UniqueName: \"kubernetes.io/projected/30e5a914-97b2-4c21-985a-db4f9913ea08-kube-api-access-55qq5\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.509331 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508650 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-ovnkube-script-lib\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.509331 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508671 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-kubelet-dir\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9" Apr 23 17:54:42.509331 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508691 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jx2j\" (UniqueName: \"kubernetes.io/projected/c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd-kube-api-access-4jx2j\") pod \"node-ca-h9g78\" (UID: \"c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd\") " pod="openshift-image-registry/node-ca-h9g78" Apr 23 17:54:42.509331 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508711 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-run-k8s-cni-cncf-io\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.509331 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508726 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-multus-conf-dir\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.509331 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508739 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-run-netns\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.509331 ip-10-0-133-178 kubenswrapper[2572]: 
I0423 17:54:42.508752 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-cni-netd\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.509331 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508785 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-device-dir\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9"
Apr 23 17:54:42.509331 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508807 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-sys-fs\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9"
Apr 23 17:54:42.509331 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.508821 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-sysctl-d\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.511731 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.511689 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-133-178.ec2.internal" podStartSLOduration=0.51167945 podStartE2EDuration="511.67945ms" podCreationTimestamp="2026-04-23 17:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:54:42.511470405 +0000 UTC m=+157.706375132" watchObservedRunningTime="2026-04-23 17:54:42.51167945 +0000 UTC m=+157.706584176"
Apr 23 17:54:42.609695 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609618 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4jx2j\" (UniqueName: \"kubernetes.io/projected/c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd-kube-api-access-4jx2j\") pod \"node-ca-h9g78\" (UID: \"c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd\") " pod="openshift-image-registry/node-ca-h9g78"
Apr 23 17:54:42.609695 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609646 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-run-k8s-cni-cncf-io\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.609695 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609663 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-multus-conf-dir\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.609695 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609680 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-run-netns\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609714 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-cni-netd\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609733 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-device-dir\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609741 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-run-k8s-cni-cncf-io\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609750 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-sys-fs\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609794 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-sysctl-d\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609806 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-cni-netd\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609817 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-sys\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609835 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-run-netns\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609849 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-sys\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609845 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-device-dir\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609843 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-lib-modules\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609873 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-multus-conf-dir\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609900 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-multus-cni-dir\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609806 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-sys-fs\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609915 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-sysctl-d\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609930 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-lib-modules\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609925 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-run-multus-certs\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.609970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609962 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/91e65909-6fc5-43ad-9403-4e762e15651f-kubelet-config\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609973 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-multus-cni-dir\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609986 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-socket-dir\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609964 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-run-multus-certs\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.609995 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/91e65909-6fc5-43ad-9403-4e762e15651f-kubelet-config\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610012 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c8l9t\" (UniqueName: \"kubernetes.io/projected/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-kube-api-access-c8l9t\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610029 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-kubernetes\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610045 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-systemd\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610062 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-hostroot\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610083 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-kubernetes\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610086 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-ovnkube-config\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610110 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ae6f204b-0425-4e4c-8749-41bce4ec27bd-hosts-file\") pod \"node-resolver-hxdgk\" (UID: \"ae6f204b-0425-4e4c-8749-41bce4ec27bd\") " pod="openshift-dns/node-resolver-hxdgk"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610111 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-systemd\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610140 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-hostroot\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610149 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ae6f204b-0425-4e4c-8749-41bce4ec27bd-hosts-file\") pod \"node-resolver-hxdgk\" (UID: \"ae6f204b-0425-4e4c-8749-41bce4ec27bd\") " pod="openshift-dns/node-resolver-hxdgk"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610172 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ae6f204b-0425-4e4c-8749-41bce4ec27bd-tmp-dir\") pod \"node-resolver-hxdgk\" (UID: \"ae6f204b-0425-4e4c-8749-41bce4ec27bd\") " pod="openshift-dns/node-resolver-hxdgk"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610193 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/095aaf33-9f06-4dd6-ab66-144f189b570f-host-slash\") pod \"iptables-alerter-4wmwg\" (UID: \"095aaf33-9f06-4dd6-ab66-144f189b570f\") " pod="openshift-network-operator/iptables-alerter-4wmwg"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610218 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/012f7036-9d2e-45a6-985c-701982b85f46-system-cni-dir\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr"
Apr 23 17:54:42.610781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610242 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l4rrl\" (UniqueName: \"kubernetes.io/projected/cc1881ec-f1a3-4551-ac37-e01f270956dc-kube-api-access-l4rrl\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610247 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-socket-dir\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9"
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610242 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/095aaf33-9f06-4dd6-ab66-144f189b570f-host-slash\") pod \"iptables-alerter-4wmwg\" (UID: \"095aaf33-9f06-4dd6-ab66-144f189b570f\") " pod="openshift-network-operator/iptables-alerter-4wmwg"
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610268 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-run-netns\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610271 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/012f7036-9d2e-45a6-985c-701982b85f46-system-cni-dir\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr"
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610303 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs\") pod \"network-metrics-daemon-lm6wc\" (UID: \"f7526a98-a284-45c2-aeb2-cce4ddcd8f45\") " pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610328 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-run-systemd\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610306 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-run-netns\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610351 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-run\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610374 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd-host\") pod \"node-ca-h9g78\" (UID: \"c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd\") " pod="openshift-image-registry/node-ca-h9g78"
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:42.610385 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610395 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-run\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610389 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-run-systemd\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610411 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/30e5a914-97b2-4c21-985a-db4f9913ea08-cni-binary-copy\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610451 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd-host\") pod \"node-ca-h9g78\" (UID: \"c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd\") " pod="openshift-image-registry/node-ca-h9g78"
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:42.610487 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs podName:f7526a98-a284-45c2-aeb2-cce4ddcd8f45 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:43.110454286 +0000 UTC m=+158.305359003 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs") pod "network-metrics-daemon-lm6wc" (UID: "f7526a98-a284-45c2-aeb2-cce4ddcd8f45") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610508 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ae6f204b-0425-4e4c-8749-41bce4ec27bd-tmp-dir\") pod \"node-resolver-hxdgk\" (UID: \"ae6f204b-0425-4e4c-8749-41bce4ec27bd\") " pod="openshift-dns/node-resolver-hxdgk"
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610548 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-var-lib-openvswitch\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.611631 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610508 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-var-lib-openvswitch\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610584 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-etc-openvswitch\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610602 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610629 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-etc-openvswitch\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610644 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610635 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mrfsk\" (UniqueName: \"kubernetes.io/projected/012f7036-9d2e-45a6-985c-701982b85f46-kube-api-access-mrfsk\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610685 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-sysconfig\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610708 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-slash\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610728 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-ovnkube-config\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610737 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-sysconfig\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610730 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-sysctl-conf\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610774 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-slash\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610796 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-var-lib-cni-bin\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610817 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/30e5a914-97b2-4c21-985a-db4f9913ea08-multus-daemon-config\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610832 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-etc-kubernetes\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610843 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-sysctl-conf\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610847 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-var-lib-cni-bin\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.612386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610849 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/26df88a3-a37a-4023-9f9f-cce91875523b-agent-certs\") pod \"konnectivity-agent-cwppv\" (UID: \"26df88a3-a37a-4023-9f9f-cce91875523b\") " pod="kube-system/konnectivity-agent-cwppv"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610885 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/012f7036-9d2e-45a6-985c-701982b85f46-cnibin\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610923 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/30e5a914-97b2-4c21-985a-db4f9913ea08-cni-binary-copy\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610908 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-tuned\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610956 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd-serviceca\") pod \"node-ca-h9g78\" (UID: \"c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd\") " pod="openshift-image-registry/node-ca-h9g78"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610961 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-etc-kubernetes\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610973 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/012f7036-9d2e-45a6-985c-701982b85f46-cnibin\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.610981 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-kubelet\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611012 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8m7d5\" (UniqueName: \"kubernetes.io/projected/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-kube-api-access-8m7d5\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611038 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/91e65909-6fc5-43ad-9403-4e762e15651f-dbus\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611061 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vl8gx\" (UniqueName: \"kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx\") pod \"network-check-target-9wq98\" (UID: \"4b5c0501-ab5e-4cac-9c9f-f306624ec47f\") " pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611065 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-kubelet\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611091 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kvvwz\" (UniqueName: \"kubernetes.io/projected/095aaf33-9f06-4dd6-ab66-144f189b570f-kube-api-access-kvvwz\") pod \"iptables-alerter-4wmwg\" (UID: \"095aaf33-9f06-4dd6-ab66-144f189b570f\") " pod="openshift-network-operator/iptables-alerter-4wmwg"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611131 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/26df88a3-a37a-4023-9f9f-cce91875523b-konnectivity-ca\") pod \"konnectivity-agent-cwppv\" (UID: \"26df88a3-a37a-4023-9f9f-cce91875523b\") " pod="kube-system/konnectivity-agent-cwppv"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611150 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc1881ec-f1a3-4551-ac37-e01f270956dc-tmp\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611169 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4zdg2\" (UniqueName: \"kubernetes.io/projected/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-kube-api-access-4zdg2\") pod \"network-metrics-daemon-lm6wc\" (UID: \"f7526a98-a284-45c2-aeb2-cce4ddcd8f45\") " pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611177 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/91e65909-6fc5-43ad-9403-4e762e15651f-dbus\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:54:42.613050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611201 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-node-log\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611237 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/30e5a914-97b2-4c21-985a-db4f9913ea08-multus-daemon-config\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb"
Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611246 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-cni-bin\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611268 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-etc-selinux\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9"
Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611290 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-env-overrides\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611310 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-host\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k"
Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611352 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-systemd-units\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc"
Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611375 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") "
pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611417 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-registration-dir\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9" Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611434 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd-serviceca\") pod \"node-ca-h9g78\" (UID: \"c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd\") " pod="openshift-image-registry/node-ca-h9g78" Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611450 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-etc-selinux\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9" Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611441 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ctfjx\" (UniqueName: \"kubernetes.io/projected/ae6f204b-0425-4e4c-8749-41bce4ec27bd-kube-api-access-ctfjx\") pod \"node-resolver-hxdgk\" (UID: \"ae6f204b-0425-4e4c-8749-41bce4ec27bd\") " pod="openshift-dns/node-resolver-hxdgk" Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611470 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-host\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") 
" pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611493 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-systemd-units\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611508 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-node-log\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611515 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-var-lib-kubelet\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611491 2572 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611529 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-cni-bin\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.613858 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611546 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-var-lib-cni-multus\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611557 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-var-lib-kubelet\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611574 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-run-openvswitch\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611592 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-var-lib-cni-multus\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611598 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-log-socket\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611603 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611634 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-run-openvswitch\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611639 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-log-socket\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611658 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" 
(UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-registration-dir\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611663 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-multus-socket-dir-parent\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611685 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/26df88a3-a37a-4023-9f9f-cce91875523b-konnectivity-ca\") pod \"konnectivity-agent-cwppv\" (UID: \"26df88a3-a37a-4023-9f9f-cce91875523b\") " pod="kube-system/konnectivity-agent-cwppv" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611691 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-run-ovn\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611710 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-run-ovn\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611735 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/012f7036-9d2e-45a6-985c-701982b85f46-os-release\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611758 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/012f7036-9d2e-45a6-985c-701982b85f46-cni-binary-copy\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611773 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/012f7036-9d2e-45a6-985c-701982b85f46-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611771 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-multus-socket-dir-parent\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.614781 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611792 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/012f7036-9d2e-45a6-985c-701982b85f46-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.615456 ip-10-0-133-178 
kubenswrapper[2572]: I0423 17:54:42.611819 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-cnibin\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.615456 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611834 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-os-release\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.615456 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611850 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:54:42.615456 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611860 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/012f7036-9d2e-45a6-985c-701982b85f46-os-release\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.615456 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611868 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-env-overrides\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.615456 ip-10-0-133-178 kubenswrapper[2572]: I0423 
17:54:42.611869 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/012f7036-9d2e-45a6-985c-701982b85f46-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.615456 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611895 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/012f7036-9d2e-45a6-985c-701982b85f46-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.615456 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611910 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-cnibin\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.615456 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611912 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-os-release\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.615456 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611939 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-modprobe-d\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.615456 ip-10-0-133-178 
kubenswrapper[2572]: E0423 17:54:42.611984 2572 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:42.615456 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.611986 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-system-cni-dir\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.615456 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.612014 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-var-lib-kubelet\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.615456 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:42.612037 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret podName:91e65909-6fc5-43ad-9403-4e762e15651f nodeName:}" failed. No retries permitted until 2026-04-23 17:54:43.112021331 +0000 UTC m=+158.306926255 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret") pod "global-pull-secret-syncer-n95c8" (UID: "91e65909-6fc5-43ad-9403-4e762e15651f") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:42.615456 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.612052 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-host-var-lib-kubelet\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.615456 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.612064 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-ovn-node-metrics-cert\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.615456 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.612089 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/095aaf33-9f06-4dd6-ab66-144f189b570f-iptables-alerter-script\") pod \"iptables-alerter-4wmwg\" (UID: \"095aaf33-9f06-4dd6-ab66-144f189b570f\") " pod="openshift-network-operator/iptables-alerter-4wmwg" Apr 23 17:54:42.615946 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.612115 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-55qq5\" (UniqueName: \"kubernetes.io/projected/30e5a914-97b2-4c21-985a-db4f9913ea08-kube-api-access-55qq5\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.615946 ip-10-0-133-178 kubenswrapper[2572]: 
I0423 17:54:42.612140 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-ovnkube-script-lib\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.615946 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.612151 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-modprobe-d\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.615946 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.612164 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-kubelet-dir\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9" Apr 23 17:54:42.615946 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.612196 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/30e5a914-97b2-4c21-985a-db4f9913ea08-system-cni-dir\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.615946 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.612233 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-kubelet-dir\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9" Apr 23 17:54:42.615946 ip-10-0-133-178 
kubenswrapper[2572]: I0423 17:54:42.612337 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/012f7036-9d2e-45a6-985c-701982b85f46-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.615946 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.612374 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/012f7036-9d2e-45a6-985c-701982b85f46-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.615946 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.612421 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/012f7036-9d2e-45a6-985c-701982b85f46-cni-binary-copy\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.615946 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.612667 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/095aaf33-9f06-4dd6-ab66-144f189b570f-iptables-alerter-script\") pod \"iptables-alerter-4wmwg\" (UID: \"095aaf33-9f06-4dd6-ab66-144f189b570f\") " pod="openshift-network-operator/iptables-alerter-4wmwg" Apr 23 17:54:42.615946 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.613097 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-ovnkube-script-lib\") pod \"ovnkube-node-v9pcc\" 
(UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.615946 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.615302 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/cc1881ec-f1a3-4551-ac37-e01f270956dc-etc-tuned\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.615946 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.615359 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc1881ec-f1a3-4551-ac37-e01f270956dc-tmp\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.615946 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.615526 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-ovn-node-metrics-cert\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.615946 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.615621 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/26df88a3-a37a-4023-9f9f-cce91875523b-agent-certs\") pod \"konnectivity-agent-cwppv\" (UID: \"26df88a3-a37a-4023-9f9f-cce91875523b\") " pod="kube-system/konnectivity-agent-cwppv" Apr 23 17:54:42.620733 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:42.620718 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:54:42.620805 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:42.620735 2572 
projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:54:42.620805 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:42.620745 2572 projected.go:194] Error preparing data for projected volume kube-api-access-vl8gx for pod openshift-network-diagnostics/network-check-target-9wq98: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:42.620805 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:42.620791 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx podName:4b5c0501-ab5e-4cac-9c9f-f306624ec47f nodeName:}" failed. No retries permitted until 2026-04-23 17:54:43.120781086 +0000 UTC m=+158.315685796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vl8gx" (UniqueName: "kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx") pod "network-check-target-9wq98" (UID: "4b5c0501-ab5e-4cac-9c9f-f306624ec47f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:42.623705 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.623642 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrfsk\" (UniqueName: \"kubernetes.io/projected/012f7036-9d2e-45a6-985c-701982b85f46-kube-api-access-mrfsk\") pod \"multus-additional-cni-plugins-f9ndr\" (UID: \"012f7036-9d2e-45a6-985c-701982b85f46\") " pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.623819 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.623750 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jx2j\" 
(UniqueName: \"kubernetes.io/projected/c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd-kube-api-access-4jx2j\") pod \"node-ca-h9g78\" (UID: \"c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd\") " pod="openshift-image-registry/node-ca-h9g78" Apr 23 17:54:42.624298 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.624255 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4rrl\" (UniqueName: \"kubernetes.io/projected/cc1881ec-f1a3-4551-ac37-e01f270956dc-kube-api-access-l4rrl\") pod \"tuned-9455k\" (UID: \"cc1881ec-f1a3-4551-ac37-e01f270956dc\") " pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.624615 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.624594 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvvwz\" (UniqueName: \"kubernetes.io/projected/095aaf33-9f06-4dd6-ab66-144f189b570f-kube-api-access-kvvwz\") pod \"iptables-alerter-4wmwg\" (UID: \"095aaf33-9f06-4dd6-ab66-144f189b570f\") " pod="openshift-network-operator/iptables-alerter-4wmwg" Apr 23 17:54:42.625170 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.625141 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctfjx\" (UniqueName: \"kubernetes.io/projected/ae6f204b-0425-4e4c-8749-41bce4ec27bd-kube-api-access-ctfjx\") pod \"node-resolver-hxdgk\" (UID: \"ae6f204b-0425-4e4c-8749-41bce4ec27bd\") " pod="openshift-dns/node-resolver-hxdgk" Apr 23 17:54:42.625322 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.625303 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zdg2\" (UniqueName: \"kubernetes.io/projected/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-kube-api-access-4zdg2\") pod \"network-metrics-daemon-lm6wc\" (UID: \"f7526a98-a284-45c2-aeb2-cce4ddcd8f45\") " pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:54:42.625433 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.625415 2572 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-55qq5\" (UniqueName: \"kubernetes.io/projected/30e5a914-97b2-4c21-985a-db4f9913ea08-kube-api-access-55qq5\") pod \"multus-6brjb\" (UID: \"30e5a914-97b2-4c21-985a-db4f9913ea08\") " pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.625952 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.625933 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8l9t\" (UniqueName: \"kubernetes.io/projected/c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6-kube-api-access-c8l9t\") pod \"aws-ebs-csi-driver-node-9hqf9\" (UID: \"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9" Apr 23 17:54:42.626380 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.626361 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m7d5\" (UniqueName: \"kubernetes.io/projected/3c2da17f-0591-4850-9fa2-fde2a8c1a8d5-kube-api-access-8m7d5\") pod \"ovnkube-node-v9pcc\" (UID: \"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.645262 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.645243 2572 scope.go:117] "RemoveContainer" containerID="be862fbb1a27d116f989acff7cdaaa399cb017863e17162d45233d74a9f7b96c" Apr 23 17:54:42.645440 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:42.645424 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" podUID="2e27a1d033408744b4b8c34c52f01b43" Apr 23 17:54:42.695739 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.695714 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9" Apr 23 17:54:42.701390 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.701361 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hxdgk" Apr 23 17:54:42.707143 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.707020 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-cwppv" Apr 23 17:54:42.707994 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:54:42.707969 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae6f204b_0425_4e4c_8749_41bce4ec27bd.slice/crio-301aefcf9b4223b50a51dea67a1fc16c0ac55795a430ee59a410e3e655146760 WatchSource:0}: Error finding container 301aefcf9b4223b50a51dea67a1fc16c0ac55795a430ee59a410e3e655146760: Status 404 returned error can't find the container with id 301aefcf9b4223b50a51dea67a1fc16c0ac55795a430ee59a410e3e655146760 Apr 23 17:54:42.712582 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.712553 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4wmwg" Apr 23 17:54:42.714204 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:54:42.714182 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26df88a3_a37a_4023_9f9f_cce91875523b.slice/crio-68ebf9d45a66c3d7c3ac2c57b92386213cb3f39ab75f8a75d607f48681374cd8 WatchSource:0}: Error finding container 68ebf9d45a66c3d7c3ac2c57b92386213cb3f39ab75f8a75d607f48681374cd8: Status 404 returned error can't find the container with id 68ebf9d45a66c3d7c3ac2c57b92386213cb3f39ab75f8a75d607f48681374cd8 Apr 23 17:54:42.717333 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.717317 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-9455k" Apr 23 17:54:42.718545 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:54:42.718513 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod095aaf33_9f06_4dd6_ab66_144f189b570f.slice/crio-d94a357b53c89747b137079a9708a70592eb3b5f01f35fc7d54729e1f6071eb9 WatchSource:0}: Error finding container d94a357b53c89747b137079a9708a70592eb3b5f01f35fc7d54729e1f6071eb9: Status 404 returned error can't find the container with id d94a357b53c89747b137079a9708a70592eb3b5f01f35fc7d54729e1f6071eb9 Apr 23 17:54:42.723326 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.723310 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-h9g78" Apr 23 17:54:42.723899 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:54:42.723884 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc1881ec_f1a3_4551_ac37_e01f270956dc.slice/crio-19e6e33790e3147efdc1dac42f66666d7794d8ec89ecfe3ded16ea6b841725d3 WatchSource:0}: Error finding container 19e6e33790e3147efdc1dac42f66666d7794d8ec89ecfe3ded16ea6b841725d3: Status 404 returned error can't find the container with id 19e6e33790e3147efdc1dac42f66666d7794d8ec89ecfe3ded16ea6b841725d3 Apr 23 17:54:42.728019 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.728002 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-6brjb" Apr 23 17:54:42.729557 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:54:42.729536 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4c9afb7_fbe4_44de_b1b1_c6a1f86b72dd.slice/crio-555f2cf9a5d1806ef767417f08cc7b2558d4e09ac5068df0c94662d9dd3bfe9d WatchSource:0}: Error finding container 555f2cf9a5d1806ef767417f08cc7b2558d4e09ac5068df0c94662d9dd3bfe9d: Status 404 returned error can't find the container with id 555f2cf9a5d1806ef767417f08cc7b2558d4e09ac5068df0c94662d9dd3bfe9d Apr 23 17:54:42.733746 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.733727 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f9ndr" Apr 23 17:54:42.733949 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:54:42.733927 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30e5a914_97b2_4c21_985a_db4f9913ea08.slice/crio-56fbfa30102b943c26f40838530a952a36ea7273675ab7cd97bd59837e071b85 WatchSource:0}: Error finding container 56fbfa30102b943c26f40838530a952a36ea7273675ab7cd97bd59837e071b85: Status 404 returned error can't find the container with id 56fbfa30102b943c26f40838530a952a36ea7273675ab7cd97bd59837e071b85 Apr 23 17:54:42.739063 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:42.739043 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:54:42.740929 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:54:42.740908 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod012f7036_9d2e_45a6_985c_701982b85f46.slice/crio-bd648b4d4b5403006107d6c82c58ddd2d4c66a83efb6e8a1ed828f762d720f84 WatchSource:0}: Error finding container bd648b4d4b5403006107d6c82c58ddd2d4c66a83efb6e8a1ed828f762d720f84: Status 404 returned error can't find the container with id bd648b4d4b5403006107d6c82c58ddd2d4c66a83efb6e8a1ed828f762d720f84 Apr 23 17:54:42.746522 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:54:42.746503 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c2da17f_0591_4850_9fa2_fde2a8c1a8d5.slice/crio-205481466bbbdc895d58d025253b44ba816f8303b2842be7ff0069be7773416f WatchSource:0}: Error finding container 205481466bbbdc895d58d025253b44ba816f8303b2842be7ff0069be7773416f: Status 404 returned error can't find the container with id 205481466bbbdc895d58d025253b44ba816f8303b2842be7ff0069be7773416f Apr 23 17:54:43.115077 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:43.115039 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:54:43.115256 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:43.115111 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs\") pod \"network-metrics-daemon-lm6wc\" (UID: \"f7526a98-a284-45c2-aeb2-cce4ddcd8f45\") " 
pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:54:43.115256 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:43.115231 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:43.115370 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:43.115296 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs podName:f7526a98-a284-45c2-aeb2-cce4ddcd8f45 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:44.115276473 +0000 UTC m=+159.310181200 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs") pod "network-metrics-daemon-lm6wc" (UID: "f7526a98-a284-45c2-aeb2-cce4ddcd8f45") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:43.115631 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:43.115489 2572 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:43.115631 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:43.115532 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret podName:91e65909-6fc5-43ad-9403-4e762e15651f nodeName:}" failed. No retries permitted until 2026-04-23 17:54:44.115519802 +0000 UTC m=+159.310424525 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret") pod "global-pull-secret-syncer-n95c8" (UID: "91e65909-6fc5-43ad-9403-4e762e15651f") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:43.215998 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:43.215960 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vl8gx\" (UniqueName: \"kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx\") pod \"network-check-target-9wq98\" (UID: \"4b5c0501-ab5e-4cac-9c9f-f306624ec47f\") " pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:54:43.216169 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:43.216126 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:54:43.216169 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:43.216148 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:54:43.216169 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:43.216161 2572 projected.go:194] Error preparing data for projected volume kube-api-access-vl8gx for pod openshift-network-diagnostics/network-check-target-9wq98: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:43.216334 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:43.216219 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx podName:4b5c0501-ab5e-4cac-9c9f-f306624ec47f nodeName:}" failed. 
No retries permitted until 2026-04-23 17:54:44.216200951 +0000 UTC m=+159.411105679 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-vl8gx" (UniqueName: "kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx") pod "network-check-target-9wq98" (UID: "4b5c0501-ab5e-4cac-9c9f-f306624ec47f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:43.668584 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:43.668546 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f9ndr" event={"ID":"012f7036-9d2e-45a6-985c-701982b85f46","Type":"ContainerStarted","Data":"bd648b4d4b5403006107d6c82c58ddd2d4c66a83efb6e8a1ed828f762d720f84"} Apr 23 17:54:43.680384 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:43.680347 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6brjb" event={"ID":"30e5a914-97b2-4c21-985a-db4f9913ea08","Type":"ContainerStarted","Data":"56fbfa30102b943c26f40838530a952a36ea7273675ab7cd97bd59837e071b85"} Apr 23 17:54:43.688144 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:43.688071 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-h9g78" event={"ID":"c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd","Type":"ContainerStarted","Data":"555f2cf9a5d1806ef767417f08cc7b2558d4e09ac5068df0c94662d9dd3bfe9d"} Apr 23 17:54:43.690866 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:43.690792 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hxdgk" event={"ID":"ae6f204b-0425-4e4c-8749-41bce4ec27bd","Type":"ContainerStarted","Data":"301aefcf9b4223b50a51dea67a1fc16c0ac55795a430ee59a410e3e655146760"} Apr 23 17:54:43.693646 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:43.693580 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" event={"ID":"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5","Type":"ContainerStarted","Data":"205481466bbbdc895d58d025253b44ba816f8303b2842be7ff0069be7773416f"} Apr 23 17:54:43.698992 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:43.698920 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-9455k" event={"ID":"cc1881ec-f1a3-4551-ac37-e01f270956dc","Type":"ContainerStarted","Data":"19e6e33790e3147efdc1dac42f66666d7794d8ec89ecfe3ded16ea6b841725d3"} Apr 23 17:54:43.708622 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:43.708553 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4wmwg" event={"ID":"095aaf33-9f06-4dd6-ab66-144f189b570f","Type":"ContainerStarted","Data":"d94a357b53c89747b137079a9708a70592eb3b5f01f35fc7d54729e1f6071eb9"} Apr 23 17:54:43.711759 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:43.711724 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-cwppv" event={"ID":"26df88a3-a37a-4023-9f9f-cce91875523b","Type":"ContainerStarted","Data":"68ebf9d45a66c3d7c3ac2c57b92386213cb3f39ab75f8a75d607f48681374cd8"} Apr 23 17:54:43.715998 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:43.715969 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9" event={"ID":"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6","Type":"ContainerStarted","Data":"cd79d71075d3e63274931a95d18c5389a501bbfe40997fc5353c25f79c5c2e08"} Apr 23 17:54:44.149142 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:44.148344 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " 
pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:54:44.149142 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:44.148423 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs\") pod \"network-metrics-daemon-lm6wc\" (UID: \"f7526a98-a284-45c2-aeb2-cce4ddcd8f45\") " pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:54:44.149142 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:44.148561 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:44.149142 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:44.148636 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs podName:f7526a98-a284-45c2-aeb2-cce4ddcd8f45 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:46.148616875 +0000 UTC m=+161.343521588 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs") pod "network-metrics-daemon-lm6wc" (UID: "f7526a98-a284-45c2-aeb2-cce4ddcd8f45") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:44.149142 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:44.149036 2572 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:44.149142 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:44.149086 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret podName:91e65909-6fc5-43ad-9403-4e762e15651f nodeName:}" failed. No retries permitted until 2026-04-23 17:54:46.149071542 +0000 UTC m=+161.343976252 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret") pod "global-pull-secret-syncer-n95c8" (UID: "91e65909-6fc5-43ad-9403-4e762e15651f") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:44.249835 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:44.249796 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vl8gx\" (UniqueName: \"kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx\") pod \"network-check-target-9wq98\" (UID: \"4b5c0501-ab5e-4cac-9c9f-f306624ec47f\") " pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:54:44.250028 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:44.249987 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:54:44.250028 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:44.250008 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:54:44.250028 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:44.250020 2572 projected.go:194] Error preparing data for projected volume kube-api-access-vl8gx for pod openshift-network-diagnostics/network-check-target-9wq98: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:44.250181 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:44.250101 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx podName:4b5c0501-ab5e-4cac-9c9f-f306624ec47f nodeName:}" failed. 
No retries permitted until 2026-04-23 17:54:46.250081791 +0000 UTC m=+161.444986500 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vl8gx" (UniqueName: "kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx") pod "network-check-target-9wq98" (UID: "4b5c0501-ab5e-4cac-9c9f-f306624ec47f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:44.414158 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:44.414078 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:54:44.414452 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:44.414218 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:54:44.414757 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:44.414737 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:54:44.414908 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:44.414861 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:54:44.415020 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:44.415007 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:54:44.415136 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:44.415116 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:54:45.410520 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:45.410172 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Apr 23 17:54:46.165253 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:46.165026 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs\") pod \"network-metrics-daemon-lm6wc\" (UID: \"f7526a98-a284-45c2-aeb2-cce4ddcd8f45\") " pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:54:46.165253 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:46.165119 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:54:46.165253 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:46.165175 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:46.165253 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:46.165201 2572 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:46.165253 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:46.165239 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs podName:f7526a98-a284-45c2-aeb2-cce4ddcd8f45 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:50.165220681 +0000 UTC m=+165.360125407 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs") pod "network-metrics-daemon-lm6wc" (UID: "f7526a98-a284-45c2-aeb2-cce4ddcd8f45") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:54:46.165253 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:46.165256 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret podName:91e65909-6fc5-43ad-9403-4e762e15651f nodeName:}" failed. No retries permitted until 2026-04-23 17:54:50.165248501 +0000 UTC m=+165.360153206 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret") pod "global-pull-secret-syncer-n95c8" (UID: "91e65909-6fc5-43ad-9403-4e762e15651f") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:54:46.266780 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:46.266133 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vl8gx\" (UniqueName: \"kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx\") pod \"network-check-target-9wq98\" (UID: \"4b5c0501-ab5e-4cac-9c9f-f306624ec47f\") " pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:54:46.266780 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:46.266295 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:54:46.266780 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:46.266313 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:54:46.266780 ip-10-0-133-178 
kubenswrapper[2572]: E0423 17:54:46.266344 2572 projected.go:194] Error preparing data for projected volume kube-api-access-vl8gx for pod openshift-network-diagnostics/network-check-target-9wq98: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:46.266780 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:46.266422 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx podName:4b5c0501-ab5e-4cac-9c9f-f306624ec47f nodeName:}" failed. No retries permitted until 2026-04-23 17:54:50.26638295 +0000 UTC m=+165.461287675 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vl8gx" (UniqueName: "kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx") pod "network-check-target-9wq98" (UID: "4b5c0501-ab5e-4cac-9c9f-f306624ec47f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:54:46.415080 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:46.414537 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:54:46.415080 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:46.414695 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:54:46.415659 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:46.415121 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:54:46.415659 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:46.415120 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:54:46.415659 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:46.415237 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45"
Apr 23 17:54:46.415659 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:46.415344 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f"
Apr 23 17:54:48.414529 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:48.414495 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:54:48.414993 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:48.414614 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:54:48.414993 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:48.414633 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f"
Apr 23 17:54:48.414993 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:48.414709 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45"
Apr 23 17:54:48.414993 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:48.414759 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:54:48.414993 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:48.414828 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f"
Apr 23 17:54:50.194848 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:50.194746 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs\") pod \"network-metrics-daemon-lm6wc\" (UID: \"f7526a98-a284-45c2-aeb2-cce4ddcd8f45\") " pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:54:50.194848 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:50.194846 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:54:50.195328 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:50.194978 2572 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:54:50.195328 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:50.195042 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret podName:91e65909-6fc5-43ad-9403-4e762e15651f nodeName:}" failed. No retries permitted until 2026-04-23 17:54:58.195022839 +0000 UTC m=+173.389927827 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret") pod "global-pull-secret-syncer-n95c8" (UID: "91e65909-6fc5-43ad-9403-4e762e15651f") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:54:50.195503 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:50.195485 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:54:50.195564 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:50.195543 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs podName:f7526a98-a284-45c2-aeb2-cce4ddcd8f45 nodeName:}" failed. No retries permitted until 2026-04-23 17:54:58.195525449 +0000 UTC m=+173.390430156 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs") pod "network-metrics-daemon-lm6wc" (UID: "f7526a98-a284-45c2-aeb2-cce4ddcd8f45") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:54:50.296215 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:50.296114 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vl8gx\" (UniqueName: \"kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx\") pod \"network-check-target-9wq98\" (UID: \"4b5c0501-ab5e-4cac-9c9f-f306624ec47f\") " pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:54:50.296465 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:50.296300 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:54:50.296465 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:50.296329 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:54:50.296465 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:50.296342 2572 projected.go:194] Error preparing data for projected volume kube-api-access-vl8gx for pod openshift-network-diagnostics/network-check-target-9wq98: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:54:50.296465 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:50.296416 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx podName:4b5c0501-ab5e-4cac-9c9f-f306624ec47f nodeName:}" failed. No retries permitted until 2026-04-23 17:54:58.29638425 +0000 UTC m=+173.491288963 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-vl8gx" (UniqueName: "kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx") pod "network-check-target-9wq98" (UID: "4b5c0501-ab5e-4cac-9c9f-f306624ec47f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:54:50.411139 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:50.411084 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:50.413755 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:50.413725 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:54:50.413894 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:50.413761 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:54:50.413894 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:50.413725 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:54:50.413894 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:50.413842 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f"
Apr 23 17:54:50.414047 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:50.413930 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45"
Apr 23 17:54:50.414047 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:50.414015 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f"
Apr 23 17:54:52.414163 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:52.414120 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:54:52.414632 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:52.414183 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:54:52.414632 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:52.414142 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:54:52.414632 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:52.414258 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f"
Apr 23 17:54:52.414632 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:52.414377 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f"
Apr 23 17:54:52.414632 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:52.414523 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45"
Apr 23 17:54:54.413908 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:54.413871 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:54:54.413908 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:54.413879 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:54:54.414448 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:54.414104 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f"
Apr 23 17:54:54.414448 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:54.414119 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:54:54.414448 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:54.414189 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45"
Apr 23 17:54:54.414448 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:54.414224 2572 scope.go:117] "RemoveContainer" containerID="be862fbb1a27d116f989acff7cdaaa399cb017863e17162d45233d74a9f7b96c"
Apr 23 17:54:54.414448 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:54.414255 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f"
Apr 23 17:54:54.414448 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:54.414391 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" podUID="2e27a1d033408744b4b8c34c52f01b43"
Apr 23 17:54:55.411520 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:55.411483 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:54:56.414687 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:56.414649 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:54:56.415149 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:56.414649 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:54:56.415149 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:56.414799 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45"
Apr 23 17:54:56.415149 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:56.414649 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:54:56.415149 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:56.414873 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f"
Apr 23 17:54:56.415149 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:56.414951 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f"
Apr 23 17:54:58.257101 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:58.257068 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:54:58.257547 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:58.257128 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs\") pod \"network-metrics-daemon-lm6wc\" (UID: \"f7526a98-a284-45c2-aeb2-cce4ddcd8f45\") " pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:54:58.257547 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:58.257249 2572 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:54:58.257547 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:58.257322 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret podName:91e65909-6fc5-43ad-9403-4e762e15651f nodeName:}" failed. No retries permitted until 2026-04-23 17:55:14.257303821 +0000 UTC m=+189.452208529 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret") pod "global-pull-secret-syncer-n95c8" (UID: "91e65909-6fc5-43ad-9403-4e762e15651f") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:54:58.257547 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:58.257249 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:54:58.257547 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:58.257434 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs podName:f7526a98-a284-45c2-aeb2-cce4ddcd8f45 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:14.25741196 +0000 UTC m=+189.452316681 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs") pod "network-metrics-daemon-lm6wc" (UID: "f7526a98-a284-45c2-aeb2-cce4ddcd8f45") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:54:58.357480 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:58.357439 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vl8gx\" (UniqueName: \"kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx\") pod \"network-check-target-9wq98\" (UID: \"4b5c0501-ab5e-4cac-9c9f-f306624ec47f\") " pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:54:58.357633 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:58.357601 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:54:58.357633 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:58.357631 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:54:58.357741 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:58.357647 2572 projected.go:194] Error preparing data for projected volume kube-api-access-vl8gx for pod openshift-network-diagnostics/network-check-target-9wq98: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:54:58.357741 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:58.357714 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx podName:4b5c0501-ab5e-4cac-9c9f-f306624ec47f nodeName:}" failed. No retries permitted until 2026-04-23 17:55:14.357692656 +0000 UTC m=+189.552597369 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-vl8gx" (UniqueName: "kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx") pod "network-check-target-9wq98" (UID: "4b5c0501-ab5e-4cac-9c9f-f306624ec47f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:54:58.414044 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:58.414010 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:54:58.414220 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:58.414017 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:54:58.414220 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:58.414137 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f"
Apr 23 17:54:58.414220 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:58.414030 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:54:58.414370 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:58.414190 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45"
Apr 23 17:54:58.414370 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:54:58.414285 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f"
Apr 23 17:54:59.750651 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:59.750439 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f9ndr" event={"ID":"012f7036-9d2e-45a6-985c-701982b85f46","Type":"ContainerStarted","Data":"c38016ffe7a0ec7de1c886076d07d70c4bffc271f85c7e5c8e0b1efe780c8901"}
Apr 23 17:54:59.751575 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:59.751553 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6brjb" event={"ID":"30e5a914-97b2-4c21-985a-db4f9913ea08","Type":"ContainerStarted","Data":"26ca5660b9fc13bb402c74b38d7ec357cd6cc2f42ea55a626cf074d2d9a58999"}
Apr 23 17:54:59.752715 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:59.752694 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-h9g78" event={"ID":"c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd","Type":"ContainerStarted","Data":"3fd5f06e9bfd119de96f1d803b45505046762388907b80bf5a88b669497da01c"}
Apr 23 17:54:59.753863 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:59.753840 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hxdgk" event={"ID":"ae6f204b-0425-4e4c-8749-41bce4ec27bd","Type":"ContainerStarted","Data":"ebcd64035e934d224493227aa4f5adfdf48f57dc554a6f182fb885867f21b210"}
Apr 23 17:54:59.755271 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:59.755244 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" event={"ID":"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5","Type":"ContainerStarted","Data":"c0790646b78976a330df9d63712ce3106dc354f7baaeb8012ac616e42b102c3f"}
Apr 23 17:54:59.755356 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:59.755280 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" event={"ID":"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5","Type":"ContainerStarted","Data":"424bbc042b2c1425d393a65711e3883744890b413c16c550f2bd75158afab962"}
Apr 23 17:54:59.757106 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:59.756674 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-9455k" event={"ID":"cc1881ec-f1a3-4551-ac37-e01f270956dc","Type":"ContainerStarted","Data":"0dc6072a08a5e0fe5f5531b3f74723814297dc64907272b70e501347b48d462c"}
Apr 23 17:54:59.758703 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:59.758683 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-cwppv" event={"ID":"26df88a3-a37a-4023-9f9f-cce91875523b","Type":"ContainerStarted","Data":"b1d7699200b8a26d4542a41ccae3328cbc364e84866373b4360f3b9494c31a14"}
Apr 23 17:54:59.759969 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:59.759949 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9" event={"ID":"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6","Type":"ContainerStarted","Data":"f612695ab6d44ccea8ff9a4ccee1a6e28ffe568c4ac108986b37608bad7c8db8"}
Apr 23 17:54:59.831612 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:59.831559 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-9455k" podStartSLOduration=33.170948157 podStartE2EDuration="49.831539896s" podCreationTimestamp="2026-04-23 17:54:10 +0000 UTC" firstStartedPulling="2026-04-23 17:54:42.725914871 +0000 UTC m=+157.920819576" lastFinishedPulling="2026-04-23 17:54:59.386506604 +0000 UTC m=+174.581411315" observedRunningTime="2026-04-23 17:54:59.830855961 +0000 UTC m=+175.025760690" watchObservedRunningTime="2026-04-23 17:54:59.831539896 +0000 UTC m=+175.026444639"
Apr 23 17:54:59.905598 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:59.905429 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hxdgk" podStartSLOduration=33.229953838 podStartE2EDuration="49.905414877s" podCreationTimestamp="2026-04-23 17:54:10 +0000 UTC" firstStartedPulling="2026-04-23 17:54:42.71107141 +0000 UTC m=+157.905976116" lastFinishedPulling="2026-04-23 17:54:59.386532435 +0000 UTC m=+174.581437155" observedRunningTime="2026-04-23 17:54:59.853518377 +0000 UTC m=+175.048423126" watchObservedRunningTime="2026-04-23 17:54:59.905414877 +0000 UTC m=+175.100319598"
Apr 23 17:54:59.939876 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:59.939832 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-cwppv" podStartSLOduration=37.837882688 podStartE2EDuration="49.939816475s" podCreationTimestamp="2026-04-23 17:54:10 +0000 UTC" firstStartedPulling="2026-04-23 17:54:42.716004719 +0000 UTC m=+157.910909424" lastFinishedPulling="2026-04-23 17:54:54.817938502 +0000 UTC m=+170.012843211" observedRunningTime="2026-04-23 17:54:59.911393052 +0000 UTC m=+175.106297778" watchObservedRunningTime="2026-04-23 17:54:59.939816475 +0000 UTC m=+175.134721202"
Apr 23 17:54:59.965352 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:59.965306 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-h9g78" podStartSLOduration=33.309907475 podStartE2EDuration="49.965285899s" podCreationTimestamp="2026-04-23 17:54:10 +0000 UTC" firstStartedPulling="2026-04-23 17:54:42.731229656 +0000 UTC m=+157.926134365" lastFinishedPulling="2026-04-23 17:54:59.38660807 +0000 UTC m=+174.581512789" observedRunningTime="2026-04-23 17:54:59.940049351 +0000 UTC m=+175.134954080" watchObservedRunningTime="2026-04-23 17:54:59.965285899 +0000 UTC m=+175.160190633"
Apr 23 17:54:59.965529 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:54:59.965395 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-6brjb" podStartSLOduration=33.266820874
podStartE2EDuration="49.965387837s" podCreationTimestamp="2026-04-23 17:54:10 +0000 UTC" firstStartedPulling="2026-04-23 17:54:42.737629055 +0000 UTC m=+157.932533761" lastFinishedPulling="2026-04-23 17:54:59.436196017 +0000 UTC m=+174.631100724" observedRunningTime="2026-04-23 17:54:59.964795221 +0000 UTC m=+175.159699949" watchObservedRunningTime="2026-04-23 17:54:59.965387837 +0000 UTC m=+175.160292566"
Apr 23 17:55:00.412309 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:00.412278 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:55:00.414594 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.414576 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:55:00.414702 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.414684 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:55:00.414702 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:00.414691 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f"
Apr 23 17:55:00.414829 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:00.414809 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45"
Apr 23 17:55:00.414888 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.414860 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:55:00.414968 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:00.414951 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f"
Apr 23 17:55:00.450363 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.450338 2572 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock"
Apr 23 17:55:00.482473 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.482370 2572 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-23T17:55:00.450360013Z","UUID":"3f8632f9-c855-4b62-9b92-be628dc5c5a5","Handler":null,"Name":"","Endpoint":""}
Apr 23 17:55:00.483804 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.483788 2572 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0
Apr 23 17:55:00.483903 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.483812 2572 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock
Apr 23 17:55:00.762225 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.762192 2572 generic.go:358] "Generic (PLEG): container finished" podID="012f7036-9d2e-45a6-985c-701982b85f46" containerID="c38016ffe7a0ec7de1c886076d07d70c4bffc271f85c7e5c8e0b1efe780c8901" exitCode=0
Apr 23 17:55:00.762773 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.762286 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f9ndr" event={"ID":"012f7036-9d2e-45a6-985c-701982b85f46","Type":"ContainerDied","Data":"c38016ffe7a0ec7de1c886076d07d70c4bffc271f85c7e5c8e0b1efe780c8901"}
Apr 23 17:55:00.764632 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.764615 2572 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 17:55:00.764904 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.764886 2572 generic.go:358] "Generic (PLEG): container finished" podID="3c2da17f-0591-4850-9fa2-fde2a8c1a8d5" containerID="c0790646b78976a330df9d63712ce3106dc354f7baaeb8012ac616e42b102c3f" exitCode=1 Apr 23 17:55:00.764965 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.764943 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" event={"ID":"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5","Type":"ContainerDied","Data":"c0790646b78976a330df9d63712ce3106dc354f7baaeb8012ac616e42b102c3f"} Apr 23 17:55:00.764965 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.764960 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" event={"ID":"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5","Type":"ContainerStarted","Data":"250990afb13826f9e0c2d18c19bb069826dbd071f2ea1428a1059eafdfc3e01f"} Apr 23 17:55:00.765207 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.764970 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" event={"ID":"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5","Type":"ContainerStarted","Data":"cee54bb065569bdfb777f11ef5d997056efdb6085997e1b2a97451788f435a4a"} Apr 23 17:55:00.765207 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.764982 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" event={"ID":"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5","Type":"ContainerStarted","Data":"f00e341784ece7d3ec979e9c954a45a8fc7dda5617a6e53d2737c2ac2203f25c"} Apr 23 17:55:00.765207 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.764994 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" 
event={"ID":"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5","Type":"ContainerStarted","Data":"e538073f8fc1abb9105878c1fed6a5c35ea5e22bf5cfb159638f99260744210d"} Apr 23 17:55:00.766143 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.766124 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4wmwg" event={"ID":"095aaf33-9f06-4dd6-ab66-144f189b570f","Type":"ContainerStarted","Data":"ae74488fa06ec10e427cf975c78bf0a03a6721f5ee619b2e1b4be89addd1fdf7"} Apr 23 17:55:00.767457 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:00.767439 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9" event={"ID":"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6","Type":"ContainerStarted","Data":"d36ffab1e5f2d031f385c711b4e916d6ec5ccd3b5eaad0e4ddfe243765ee6044"} Apr 23 17:55:01.771049 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:01.771007 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9" event={"ID":"c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6","Type":"ContainerStarted","Data":"e13b294021779135dc42bcfd7bd08818d9aeefe990c76a35c2e3b0a56507601a"} Apr 23 17:55:01.800296 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:01.800234 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-9hqf9" podStartSLOduration=33.28542379 podStartE2EDuration="51.800216962s" podCreationTimestamp="2026-04-23 17:54:10 +0000 UTC" firstStartedPulling="2026-04-23 17:54:42.704629346 +0000 UTC m=+157.899534051" lastFinishedPulling="2026-04-23 17:55:01.219422515 +0000 UTC m=+176.414327223" observedRunningTime="2026-04-23 17:55:01.799837684 +0000 UTC m=+176.994742412" watchObservedRunningTime="2026-04-23 17:55:01.800216962 +0000 UTC m=+176.995121686" Apr 23 17:55:01.800503 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:01.800477 2572 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-4wmwg" podStartSLOduration=35.134880934 podStartE2EDuration="51.800471441s" podCreationTimestamp="2026-04-23 17:54:10 +0000 UTC" firstStartedPulling="2026-04-23 17:54:42.720959953 +0000 UTC m=+157.915864661" lastFinishedPulling="2026-04-23 17:54:59.386550462 +0000 UTC m=+174.581455168" observedRunningTime="2026-04-23 17:55:00.886800247 +0000 UTC m=+176.081704973" watchObservedRunningTime="2026-04-23 17:55:01.800471441 +0000 UTC m=+176.995376168" Apr 23 17:55:02.414162 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:02.414079 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:02.414162 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:02.414133 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:02.414376 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:02.414209 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:02.414376 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:02.414077 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:02.414376 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:02.414318 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:02.414554 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:02.414507 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:02.707589 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:02.707516 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-cwppv" Apr 23 17:55:02.708198 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:02.708174 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-cwppv" Apr 23 17:55:02.776329 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:02.776300 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 17:55:02.776758 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:02.776713 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" event={"ID":"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5","Type":"ContainerStarted","Data":"56a13a88f292171bb5d870587be3946b002a1015c29793f871b28da27365c0f9"} Apr 23 17:55:02.777103 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:02.777080 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-cwppv" Apr 23 17:55:02.777625 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:02.777605 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="kube-system/konnectivity-agent-cwppv" Apr 23 17:55:03.707866 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:03.707835 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-hxdgk_ae6f204b-0425-4e4c-8749-41bce4ec27bd/dns-node-resolver/0.log" Apr 23 17:55:04.414042 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:04.413958 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:04.414545 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:04.414102 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:04.414545 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:04.414121 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:04.414545 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:04.414198 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:04.414545 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:04.414241 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:04.414545 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:04.414315 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:04.681510 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:04.681350 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-h9g78_c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd/node-ca/0.log" Apr 23 17:55:04.783311 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:04.783111 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 17:55:04.783789 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:04.783499 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" event={"ID":"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5","Type":"ContainerStarted","Data":"99f2470edb04da64bb91ea602bd5ff95b2d0457e4486c1394659435143f108f8"} Apr 23 17:55:04.783789 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:04.783765 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:55:04.784002 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:04.783797 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:55:04.784002 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:04.783812 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:55:04.784002 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:04.783932 2572 scope.go:117] "RemoveContainer" containerID="c0790646b78976a330df9d63712ce3106dc354f7baaeb8012ac616e42b102c3f" Apr 23 17:55:04.799346 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:04.799111 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:55:04.799346 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:04.799218 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:55:05.413184 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:05.413144 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:05.415357 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:05.415340 2572 scope.go:117] "RemoveContainer" containerID="be862fbb1a27d116f989acff7cdaaa399cb017863e17162d45233d74a9f7b96c" Apr 23 17:55:05.415727 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:05.415517 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_openshift-machine-config-operator(2e27a1d033408744b4b8c34c52f01b43)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" podUID="2e27a1d033408744b4b8c34c52f01b43" Apr 23 17:55:05.788180 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:05.788152 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 
17:55:05.788530 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:05.788505 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" event={"ID":"3c2da17f-0591-4850-9fa2-fde2a8c1a8d5","Type":"ContainerStarted","Data":"21fc0a74609ba7a222cef106b79dd4f84c37d34f07b588c4e5465b21745dafd8"} Apr 23 17:55:05.789969 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:05.789945 2572 generic.go:358] "Generic (PLEG): container finished" podID="012f7036-9d2e-45a6-985c-701982b85f46" containerID="dd6a23fbdd0c5eb99d436a8ae4af8ffac1c22190c4e7f2952abef4fb7f24f6b9" exitCode=0 Apr 23 17:55:05.790066 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:05.789978 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f9ndr" event={"ID":"012f7036-9d2e-45a6-985c-701982b85f46","Type":"ContainerDied","Data":"dd6a23fbdd0c5eb99d436a8ae4af8ffac1c22190c4e7f2952abef4fb7f24f6b9"} Apr 23 17:55:05.840221 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:05.838421 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" podStartSLOduration=39.129044507 podStartE2EDuration="55.838388009s" podCreationTimestamp="2026-04-23 17:54:10 +0000 UTC" firstStartedPulling="2026-04-23 17:54:42.747906877 +0000 UTC m=+157.942811582" lastFinishedPulling="2026-04-23 17:54:59.457250373 +0000 UTC m=+174.652155084" observedRunningTime="2026-04-23 17:55:05.832975849 +0000 UTC m=+181.027880570" watchObservedRunningTime="2026-04-23 17:55:05.838388009 +0000 UTC m=+181.033292737" Apr 23 17:55:06.282524 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:06.282499 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-9hqf9_c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6/csi-driver/0.log" Apr 23 17:55:06.414605 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:06.414568 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:06.414809 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:06.414568 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:06.414809 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:06.414671 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:06.414809 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:06.414568 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:06.414809 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:06.414748 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:06.414959 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:06.414817 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:06.488870 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:06.488832 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-9hqf9_c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6/csi-node-driver-registrar/0.log" Apr 23 17:55:06.682021 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:06.681949 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-9hqf9_c2030a4c-0080-4f20-acdf-ad7bf7a7f5c6/csi-liveness-probe/0.log" Apr 23 17:55:07.795225 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:07.795187 2572 generic.go:358] "Generic (PLEG): container finished" podID="012f7036-9d2e-45a6-985c-701982b85f46" containerID="7a0a6eaae5a9fc079da1d0bb35c6110c46da8ecba75b62be401ba5a9e0d2d2f1" exitCode=0 Apr 23 17:55:07.795709 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:07.795260 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f9ndr" event={"ID":"012f7036-9d2e-45a6-985c-701982b85f46","Type":"ContainerDied","Data":"7a0a6eaae5a9fc079da1d0bb35c6110c46da8ecba75b62be401ba5a9e0d2d2f1"} Apr 23 17:55:08.414318 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:08.414276 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:08.414501 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:08.414276 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:08.414501 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:08.414426 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:08.414501 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:08.414294 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:08.414501 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:08.414477 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:08.414664 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:08.414559 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:09.800908 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:09.800866 2572 generic.go:358] "Generic (PLEG): container finished" podID="012f7036-9d2e-45a6-985c-701982b85f46" containerID="06f5d69b4f2b0c0c6fa1d7a637645fdee8eb6d7d7c4f244813d117d6a7abd410" exitCode=0 Apr 23 17:55:09.800908 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:09.800910 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f9ndr" event={"ID":"012f7036-9d2e-45a6-985c-701982b85f46","Type":"ContainerDied","Data":"06f5d69b4f2b0c0c6fa1d7a637645fdee8eb6d7d7c4f244813d117d6a7abd410"} Apr 23 17:55:10.414328 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:10.414286 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:10.414535 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:10.414421 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:10.414535 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:10.414431 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:10.414535 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:10.414463 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:10.414693 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:10.414550 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:10.414693 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:10.414602 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:10.414693 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:10.414664 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:12.413677 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:12.413632 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:12.414263 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:12.413632 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:12.414263 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:12.413787 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:12.414263 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:12.413633 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:12.414263 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:12.413856 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:12.414263 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:12.413967 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:14.274006 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:14.273957 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:14.274442 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:14.274039 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs\") pod \"network-metrics-daemon-lm6wc\" (UID: \"f7526a98-a284-45c2-aeb2-cce4ddcd8f45\") " pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:14.274442 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:14.274134 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:14.274442 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:14.274214 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs podName:f7526a98-a284-45c2-aeb2-cce4ddcd8f45 nodeName:}" failed. No retries permitted until 2026-04-23 17:55:46.274195081 +0000 UTC m=+221.469099803 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs") pod "network-metrics-daemon-lm6wc" (UID: "f7526a98-a284-45c2-aeb2-cce4ddcd8f45") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:55:14.274442 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:14.274134 2572 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:55:14.274442 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:14.274309 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret podName:91e65909-6fc5-43ad-9403-4e762e15651f nodeName:}" failed. No retries permitted until 2026-04-23 17:55:46.27428077 +0000 UTC m=+221.469185486 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret") pod "global-pull-secret-syncer-n95c8" (UID: "91e65909-6fc5-43ad-9403-4e762e15651f") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:55:14.374433 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:14.374380 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vl8gx\" (UniqueName: \"kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx\") pod \"network-check-target-9wq98\" (UID: \"4b5c0501-ab5e-4cac-9c9f-f306624ec47f\") " pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:14.374617 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:14.374565 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:55:14.374617 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:14.374592 2572 
projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:55:14.374617 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:14.374607 2572 projected.go:194] Error preparing data for projected volume kube-api-access-vl8gx for pod openshift-network-diagnostics/network-check-target-9wq98: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:14.374748 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:14.374673 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx podName:4b5c0501-ab5e-4cac-9c9f-f306624ec47f nodeName:}" failed. No retries permitted until 2026-04-23 17:55:46.374654412 +0000 UTC m=+221.569559134 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-vl8gx" (UniqueName: "kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx") pod "network-check-target-9wq98" (UID: "4b5c0501-ab5e-4cac-9c9f-f306624ec47f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:55:14.414487 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:14.414454 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:14.414652 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:14.414455 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:14.414652 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:14.414579 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:14.414759 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:14.414455 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:14.414759 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:14.414655 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:14.414759 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:14.414746 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:15.415023 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:15.414985 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:15.817888 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:15.817853 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f9ndr" event={"ID":"012f7036-9d2e-45a6-985c-701982b85f46","Type":"ContainerStarted","Data":"45b917e464653764c7b37a958890e5845e059a2ef0852f9844a225748ea80dce"} Apr 23 17:55:16.413922 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:16.413884 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:16.414099 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:16.413886 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:16.414099 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:16.413987 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:16.414099 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:16.414078 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:16.414099 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:16.413886 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:16.414253 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:16.414156 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:16.821655 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:16.821620 2572 generic.go:358] "Generic (PLEG): container finished" podID="012f7036-9d2e-45a6-985c-701982b85f46" containerID="45b917e464653764c7b37a958890e5845e059a2ef0852f9844a225748ea80dce" exitCode=0 Apr 23 17:55:16.822118 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:16.821675 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f9ndr" event={"ID":"012f7036-9d2e-45a6-985c-701982b85f46","Type":"ContainerDied","Data":"45b917e464653764c7b37a958890e5845e059a2ef0852f9844a225748ea80dce"} Apr 23 17:55:17.414734 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:17.414702 2572 scope.go:117] "RemoveContainer" containerID="be862fbb1a27d116f989acff7cdaaa399cb017863e17162d45233d74a9f7b96c" Apr 23 17:55:17.826373 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:17.826339 2572 generic.go:358] "Generic (PLEG): container finished" podID="012f7036-9d2e-45a6-985c-701982b85f46" containerID="3c2c1ce78f728e2bb098f978490e2d3800b20c23bdc9d692e6cbd0ee63231a79" exitCode=0 Apr 23 17:55:17.827125 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:17.826421 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f9ndr" event={"ID":"012f7036-9d2e-45a6-985c-701982b85f46","Type":"ContainerDied","Data":"3c2c1ce78f728e2bb098f978490e2d3800b20c23bdc9d692e6cbd0ee63231a79"} Apr 23 17:55:17.828073 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:17.828053 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 17:55:17.828418 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:17.828380 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" event={"ID":"2e27a1d033408744b4b8c34c52f01b43","Type":"ContainerStarted","Data":"4cdc93328495d2b65f6a6e2f5607f95f8eda00aeb6b91b897cdf9f29f1bb0c7f"} Apr 23 17:55:17.874754 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:17.874705 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal" podStartSLOduration=35.874688269 podStartE2EDuration="35.874688269s" podCreationTimestamp="2026-04-23 17:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:55:17.874228438 +0000 UTC m=+193.069133165" watchObservedRunningTime="2026-04-23 17:55:17.874688269 +0000 UTC m=+193.069592997" Apr 23 17:55:18.414036 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:18.413861 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:18.414212 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:18.413920 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:18.414212 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:18.414120 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:18.414212 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:18.414180 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:18.414212 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:18.413941 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:18.414338 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:18.414260 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:18.833434 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:18.833387 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f9ndr" event={"ID":"012f7036-9d2e-45a6-985c-701982b85f46","Type":"ContainerStarted","Data":"f066a1d4efd97ff3b05fd4c1cc80280e221583a85c342c99d29b999db1c5e1b6"} Apr 23 17:55:20.413788 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:20.413751 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:20.414247 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:20.413852 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:20.414247 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:20.413867 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:20.414247 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:20.413881 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:20.414247 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:20.413951 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:20.414247 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:20.414018 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:20.416196 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:20.416175 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:22.414052 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:22.414020 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:22.414456 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:22.414027 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:22.414456 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:22.414117 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:22.414456 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:22.414196 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:22.414456 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:22.414027 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:22.414456 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:22.414282 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:24.413746 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:24.413704 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:24.414131 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:24.413704 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:24.414131 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:24.413820 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:24.414131 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:24.413704 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:24.414131 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:24.413891 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:24.414131 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:24.413958 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:25.416739 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:25.416695 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:26.413919 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:26.413876 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:26.413919 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:26.413911 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:26.414139 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:26.413947 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:26.414139 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:26.414043 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:26.414226 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:26.414156 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:26.414346 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:26.414318 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:28.413841 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:28.413799 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:28.414278 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:28.413802 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:28.414278 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:28.413915 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:28.414278 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:28.413802 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:28.414278 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:28.413990 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:28.414278 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:28.414087 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:30.414620 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:30.414585 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:30.414980 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:30.414627 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:30.414980 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:30.414680 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:30.414980 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:30.414717 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:30.414980 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:30.414757 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:30.414980 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:30.414812 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:30.418034 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:30.418012 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:32.414050 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:32.414019 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:32.414478 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:32.414019 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:32.414478 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:32.414120 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:32.414478 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:32.414019 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:32.414478 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:32.414219 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:32.414478 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:32.414311 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:34.413903 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:34.413859 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:34.413903 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:34.413896 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:34.414322 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:34.413921 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:34.414322 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:34.413988 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:34.414322 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:34.414041 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:34.414322 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:34.414117 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:35.409691 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:35.409645 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-f9ndr" podStartSLOduration=52.584169809 podStartE2EDuration="1m25.409631254s" podCreationTimestamp="2026-04-23 17:54:10 +0000 UTC" firstStartedPulling="2026-04-23 17:54:42.742781641 +0000 UTC m=+157.937686350" lastFinishedPulling="2026-04-23 17:55:15.568243086 +0000 UTC m=+190.763147795" observedRunningTime="2026-04-23 17:55:18.863686135 +0000 UTC m=+194.058590863" watchObservedRunningTime="2026-04-23 17:55:35.409631254 +0000 UTC m=+210.604535982" Apr 23 17:55:35.418909 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:35.418881 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 23 17:55:36.414044 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:36.414007 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:36.414044 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:36.414025 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc" Apr 23 17:55:36.414282 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:36.414233 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45" Apr 23 17:55:36.414349 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:36.414295 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f" Apr 23 17:55:36.414349 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:36.414332 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98" Apr 23 17:55:36.414479 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:36.414417 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f" Apr 23 17:55:36.695035 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:36.694956 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lm6wc"] Apr 23 17:55:36.698235 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:36.698204 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-9wq98"] Apr 23 17:55:36.698908 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:36.698883 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-n95c8"] Apr 23 17:55:36.803947 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:36.803899 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" podUID="3c2da17f-0591-4850-9fa2-fde2a8c1a8d5" containerName="ovnkube-controller" probeResult="failure" output="" Apr 23 17:55:36.864773 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:36.864739 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8" Apr 23 17:55:36.864946 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:36.864739 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:55:36.864946 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:36.864864 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f"
Apr 23 17:55:36.864946 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:36.864739 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:55:36.864946 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:36.864916 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45"
Apr 23 17:55:36.865155 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:36.864966 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f"
Apr 23 17:55:38.414234 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:38.414200 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:55:38.414649 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:38.414302 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f"
Apr 23 17:55:38.414649 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:38.414205 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:55:38.414649 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:38.414205 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:55:38.414649 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:38.414458 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45"
Apr 23 17:55:38.414649 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:38.414514 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f"
Apr 23 17:55:40.414668 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:40.414586 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:55:40.415114 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:40.414587 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:55:40.415114 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:40.414688 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f"
Apr 23 17:55:40.415114 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:40.414798 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f"
Apr 23 17:55:40.415114 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:40.414587 2572 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:55:40.415114 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:40.414897 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45"
Apr 23 17:55:40.420052 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:40.420030 2572 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Apr 23 17:55:42.414703 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:42.414666 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:55:42.415139 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:42.414661 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:55:42.415139 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:42.414789 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f"
Apr 23 17:55:42.415139 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:42.414663 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:55:42.415139 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:42.414902 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f"
Apr 23 17:55:42.415139 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:42.414940 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45"
Apr 23 17:55:43.662746 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.662718 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-7dmlv"]
Apr 23 17:55:43.669216 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.669197 2572 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.671567 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.671543 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\""
Apr 23 17:55:43.671701 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.671591 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\""
Apr 23 17:55:43.671701 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.671662 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\""
Apr 23 17:55:43.671844 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.671714 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\""
Apr 23 17:55:43.672172 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.672152 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\""
Apr 23 17:55:43.672259 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.672174 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-ctqbm\""
Apr 23 17:55:43.672391 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.672380 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\""
Apr 23 17:55:43.860562 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.860522 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.860562 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.860562 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-node-exporter-accelerators-collector-config\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.860764 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.860663 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-node-exporter-tls\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.860764 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.860691 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktw76\" (UniqueName: \"kubernetes.io/projected/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-kube-api-access-ktw76\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.860764 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.860713 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-sys\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.860875 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.860794 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-metrics-client-ca\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.860875 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.860819 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-node-exporter-textfile\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.860875 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.860838 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-node-exporter-wtmp\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.860973 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.860877 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-root\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.961882 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.961853 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-node-exporter-tls\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.962058 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.961893 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ktw76\" (UniqueName: \"kubernetes.io/projected/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-kube-api-access-ktw76\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.962058 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.961920 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-sys\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.962058 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.961996 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-metrics-client-ca\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.962058 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.962041 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-node-exporter-textfile\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.962263 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.962180 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-sys\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.962263 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.962220 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-node-exporter-wtmp\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.962346 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.962275 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-root\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.962346 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.962322 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.962346 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.962326 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-node-exporter-wtmp\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.962541 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.962350 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-node-exporter-accelerators-collector-config\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.962541 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.962370 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-root\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.962541 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.962438 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-node-exporter-textfile\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.962719 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.962699 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-metrics-client-ca\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.962851 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.962836 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-node-exporter-accelerators-collector-config\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.965873 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.965853 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-node-exporter-tls\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.965873 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.965860 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.977799 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.977770 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktw76\" (UniqueName: \"kubernetes.io/projected/e178bb76-2a9b-4c0b-a47c-8be8d733a32a-kube-api-access-ktw76\") pod \"node-exporter-7dmlv\" (UID: \"e178bb76-2a9b-4c0b-a47c-8be8d733a32a\") " pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.978578 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:43.978564 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-7dmlv"
Apr 23 17:55:43.989624 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:55:43.989598 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode178bb76_2a9b_4c0b_a47c_8be8d733a32a.slice/crio-ebe4b7230e14cc648cf52f3aa7da5e695227ec77d69f2092820d12b71b6769aa WatchSource:0}: Error finding container ebe4b7230e14cc648cf52f3aa7da5e695227ec77d69f2092820d12b71b6769aa: Status 404 returned error can't find the container with id ebe4b7230e14cc648cf52f3aa7da5e695227ec77d69f2092820d12b71b6769aa
Apr 23 17:55:44.413898 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:44.413822 2572 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:55:44.414040 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:44.413930 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9wq98" podUID="4b5c0501-ab5e-4cac-9c9f-f306624ec47f"
Apr 23 17:55:44.414040 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:44.414010 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:55:44.414133 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:44.414116 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lm6wc" podUID="f7526a98-a284-45c2-aeb2-cce4ddcd8f45"
Apr 23 17:55:44.414186 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:44.414175 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:55:44.414245 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:44.414232 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-n95c8" podUID="91e65909-6fc5-43ad-9403-4e762e15651f"
Apr 23 17:55:44.881240 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:44.881207 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7dmlv" event={"ID":"e178bb76-2a9b-4c0b-a47c-8be8d733a32a","Type":"ContainerStarted","Data":"ebe4b7230e14cc648cf52f3aa7da5e695227ec77d69f2092820d12b71b6769aa"}
Apr 23 17:55:45.884237 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:45.884200 2572 generic.go:358] "Generic (PLEG): container finished" podID="e178bb76-2a9b-4c0b-a47c-8be8d733a32a" containerID="e320ff69d691d51fe7344949cc430d91cba35d4ff3f93f7d7f8aff28fcc75edd" exitCode=0
Apr 23 17:55:45.884627 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:45.884285 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7dmlv" event={"ID":"e178bb76-2a9b-4c0b-a47c-8be8d733a32a","Type":"ContainerDied","Data":"e320ff69d691d51fe7344949cc430d91cba35d4ff3f93f7d7f8aff28fcc75edd"}
Apr 23 17:55:46.280271 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:46.280236 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:55:46.280385 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:46.280288 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs\") pod \"network-metrics-daemon-lm6wc\" (UID: \"f7526a98-a284-45c2-aeb2-cce4ddcd8f45\") " pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:55:46.280446 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:46.280379 2572 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:55:46.280446 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:46.280392 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:46.280514 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:46.280461 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret podName:91e65909-6fc5-43ad-9403-4e762e15651f nodeName:}" failed. No retries permitted until 2026-04-23 17:56:50.280443443 +0000 UTC m=+285.475348153 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret") pod "global-pull-secret-syncer-n95c8" (UID: "91e65909-6fc5-43ad-9403-4e762e15651f") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:55:46.280514 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:46.280474 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs podName:f7526a98-a284-45c2-aeb2-cce4ddcd8f45 nodeName:}" failed. No retries permitted until 2026-04-23 17:56:50.280467854 +0000 UTC m=+285.475372559 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs") pod "network-metrics-daemon-lm6wc" (UID: "f7526a98-a284-45c2-aeb2-cce4ddcd8f45") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:55:46.380630 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:46.380589 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vl8gx\" (UniqueName: \"kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx\") pod \"network-check-target-9wq98\" (UID: \"4b5c0501-ab5e-4cac-9c9f-f306624ec47f\") " pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:55:46.380791 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:46.380756 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:55:46.380791 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:46.380778 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:55:46.380791 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:46.380789 2572 projected.go:194] Error preparing data for projected volume kube-api-access-vl8gx for pod openshift-network-diagnostics/network-check-target-9wq98: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:46.380903 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:55:46.380838 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx podName:4b5c0501-ab5e-4cac-9c9f-f306624ec47f nodeName:}" failed.
No retries permitted until 2026-04-23 17:56:50.38082431 +0000 UTC m=+285.575729015 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vl8gx" (UniqueName: "kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx") pod "network-check-target-9wq98" (UID: "4b5c0501-ab5e-4cac-9c9f-f306624ec47f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:55:46.413890 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:46.413808 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:55:46.413890 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:46.413837 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:55:46.414151 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:46.413993 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:55:46.416515 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:46.416496 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-q5nnb\""
Apr 23 17:55:46.416617 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:46.416575 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\""
Apr 23 17:55:46.417076 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:46.417053 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-r6xp2\""
Apr 23 17:55:46.417076 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:46.417068 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Apr 23 17:55:46.417076 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:46.417075 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Apr 23 17:55:46.417303 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:46.417286 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Apr 23 17:55:46.887830 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:46.887798 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7dmlv" event={"ID":"e178bb76-2a9b-4c0b-a47c-8be8d733a32a","Type":"ContainerStarted","Data":"b3412c2923a409de9e4d3a65aec819d39c2d0fc2cbba24231c502f9e5e019cf1"}
Apr 23 17:55:46.887830 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:46.887829 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7dmlv" event={"ID":"e178bb76-2a9b-4c0b-a47c-8be8d733a32a","Type":"ContainerStarted","Data":"04342d512dd5ce91763caaddfe810124c1212a47e9d91eec324c90159d5bf2d5"}
Apr 23 17:55:52.727634 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.727604 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-133-178.ec2.internal" event="NodeReady"
Apr 23 17:55:52.774514 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.774459 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-7dmlv" podStartSLOduration=8.531567917 podStartE2EDuration="9.774443313s" podCreationTimestamp="2026-04-23 17:55:43 +0000 UTC" firstStartedPulling="2026-04-23 17:55:43.991187876 +0000 UTC m=+219.186092595" lastFinishedPulling="2026-04-23 17:55:45.234063276 +0000 UTC m=+220.428967991" observedRunningTime="2026-04-23 17:55:46.910270419 +0000 UTC m=+222.105175146" watchObservedRunningTime="2026-04-23 17:55:52.774443313 +0000 UTC m=+227.969348031"
Apr 23 17:55:52.774982 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.774956 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-jljrn"]
Apr 23 17:55:52.778178 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.778163 2572 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jljrn"
Apr 23 17:55:52.780719 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.780702 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Apr 23 17:55:52.780926 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.780911 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-xzdf6\""
Apr 23 17:55:52.780982 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.780929 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Apr 23 17:55:52.781841 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.781827 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Apr 23 17:55:52.786066 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.786047 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-vn5xz"]
Apr 23 17:55:52.788816 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.788800 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-vn5xz"
Apr 23 17:55:52.789465 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.789447 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jljrn"]
Apr 23 17:55:52.790817 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.790793 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\""
Apr 23 17:55:52.790942 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.790797 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\""
Apr 23 17:55:52.791178 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.791156 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-9q5jf\""
Apr 23 17:55:52.791266 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.791207 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\""
Apr 23 17:55:52.791332 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.791311 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\""
Apr 23 17:55:52.800414 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.800387 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-vn5xz"]
Apr 23 17:55:52.807079 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.807048 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-vlmvx"]
Apr 23 17:55:52.810628 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.810612 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-vlmvx"
Apr 23 17:55:52.812785 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.812759 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Apr 23 17:55:52.812935 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.812920 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-jll6l\""
Apr 23 17:55:52.813004 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.812988 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Apr 23 17:55:52.821165 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.821147 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/3cebb2bf-0419-4c26-b3b0-732d1737d1b3-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-vn5xz\" (UID: \"3cebb2bf-0419-4c26-b3b0-732d1737d1b3\") " pod="openshift-insights/insights-runtime-extractor-vn5xz"
Apr 23 17:55:52.821257 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.821172 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx58p\" (UniqueName: \"kubernetes.io/projected/d451234e-ffc1-49bd-b43f-8b0057291cc5-kube-api-access-hx58p\") pod \"dns-default-vlmvx\" (UID: \"d451234e-ffc1-49bd-b43f-8b0057291cc5\") " pod="openshift-dns/dns-default-vlmvx"
Apr 23 17:55:52.821257 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.821188 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b33314e-4870-41bb-a49e-503d87fbf785-cert\") pod \"ingress-canary-jljrn\" (UID: \"9b33314e-4870-41bb-a49e-503d87fbf785\") " pod="openshift-ingress-canary/ingress-canary-jljrn"
Apr 23 17:55:52.821257 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.821206 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldh2b\" (UniqueName: \"kubernetes.io/projected/3cebb2bf-0419-4c26-b3b0-732d1737d1b3-kube-api-access-ldh2b\") pod \"insights-runtime-extractor-vn5xz\" (UID: \"3cebb2bf-0419-4c26-b3b0-732d1737d1b3\") " pod="openshift-insights/insights-runtime-extractor-vn5xz"
Apr 23 17:55:52.821257 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.821224 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tqqv\" (UniqueName: \"kubernetes.io/projected/9b33314e-4870-41bb-a49e-503d87fbf785-kube-api-access-2tqqv\") pod \"ingress-canary-jljrn\" (UID: \"9b33314e-4870-41bb-a49e-503d87fbf785\") " pod="openshift-ingress-canary/ingress-canary-jljrn"
Apr 23 17:55:52.821443 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.821294 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/3cebb2bf-0419-4c26-b3b0-732d1737d1b3-data-volume\") pod \"insights-runtime-extractor-vn5xz\" (UID: \"3cebb2bf-0419-4c26-b3b0-732d1737d1b3\") " pod="openshift-insights/insights-runtime-extractor-vn5xz"
Apr 23 17:55:52.821443 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.821352 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d451234e-ffc1-49bd-b43f-8b0057291cc5-config-volume\") pod \"dns-default-vlmvx\" (UID: \"d451234e-ffc1-49bd-b43f-8b0057291cc5\") " pod="openshift-dns/dns-default-vlmvx"
Apr 23 17:55:52.821443 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.821380 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName:
\"kubernetes.io/configmap/3cebb2bf-0419-4c26-b3b0-732d1737d1b3-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-vn5xz\" (UID: \"3cebb2bf-0419-4c26-b3b0-732d1737d1b3\") " pod="openshift-insights/insights-runtime-extractor-vn5xz" Apr 23 17:55:52.821443 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.821412 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d451234e-ffc1-49bd-b43f-8b0057291cc5-metrics-tls\") pod \"dns-default-vlmvx\" (UID: \"d451234e-ffc1-49bd-b43f-8b0057291cc5\") " pod="openshift-dns/dns-default-vlmvx" Apr 23 17:55:52.821443 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.821439 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d451234e-ffc1-49bd-b43f-8b0057291cc5-tmp-dir\") pod \"dns-default-vlmvx\" (UID: \"d451234e-ffc1-49bd-b43f-8b0057291cc5\") " pod="openshift-dns/dns-default-vlmvx" Apr 23 17:55:52.821614 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.821456 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/3cebb2bf-0419-4c26-b3b0-732d1737d1b3-crio-socket\") pod \"insights-runtime-extractor-vn5xz\" (UID: \"3cebb2bf-0419-4c26-b3b0-732d1737d1b3\") " pod="openshift-insights/insights-runtime-extractor-vn5xz" Apr 23 17:55:52.825426 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.825388 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vlmvx"] Apr 23 17:55:52.921733 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.921703 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/3cebb2bf-0419-4c26-b3b0-732d1737d1b3-data-volume\") pod \"insights-runtime-extractor-vn5xz\" (UID: 
\"3cebb2bf-0419-4c26-b3b0-732d1737d1b3\") " pod="openshift-insights/insights-runtime-extractor-vn5xz" Apr 23 17:55:52.921885 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.921748 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d451234e-ffc1-49bd-b43f-8b0057291cc5-config-volume\") pod \"dns-default-vlmvx\" (UID: \"d451234e-ffc1-49bd-b43f-8b0057291cc5\") " pod="openshift-dns/dns-default-vlmvx" Apr 23 17:55:52.921885 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.921866 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/3cebb2bf-0419-4c26-b3b0-732d1737d1b3-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-vn5xz\" (UID: \"3cebb2bf-0419-4c26-b3b0-732d1737d1b3\") " pod="openshift-insights/insights-runtime-extractor-vn5xz" Apr 23 17:55:52.921969 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.921897 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d451234e-ffc1-49bd-b43f-8b0057291cc5-metrics-tls\") pod \"dns-default-vlmvx\" (UID: \"d451234e-ffc1-49bd-b43f-8b0057291cc5\") " pod="openshift-dns/dns-default-vlmvx" Apr 23 17:55:52.921969 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.921943 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d451234e-ffc1-49bd-b43f-8b0057291cc5-tmp-dir\") pod \"dns-default-vlmvx\" (UID: \"d451234e-ffc1-49bd-b43f-8b0057291cc5\") " pod="openshift-dns/dns-default-vlmvx" Apr 23 17:55:52.922076 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.921974 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/3cebb2bf-0419-4c26-b3b0-732d1737d1b3-crio-socket\") pod \"insights-runtime-extractor-vn5xz\" (UID: 
\"3cebb2bf-0419-4c26-b3b0-732d1737d1b3\") " pod="openshift-insights/insights-runtime-extractor-vn5xz" Apr 23 17:55:52.922076 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.922036 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/3cebb2bf-0419-4c26-b3b0-732d1737d1b3-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-vn5xz\" (UID: \"3cebb2bf-0419-4c26-b3b0-732d1737d1b3\") " pod="openshift-insights/insights-runtime-extractor-vn5xz" Apr 23 17:55:52.922076 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.922061 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hx58p\" (UniqueName: \"kubernetes.io/projected/d451234e-ffc1-49bd-b43f-8b0057291cc5-kube-api-access-hx58p\") pod \"dns-default-vlmvx\" (UID: \"d451234e-ffc1-49bd-b43f-8b0057291cc5\") " pod="openshift-dns/dns-default-vlmvx" Apr 23 17:55:52.922195 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.922083 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b33314e-4870-41bb-a49e-503d87fbf785-cert\") pod \"ingress-canary-jljrn\" (UID: \"9b33314e-4870-41bb-a49e-503d87fbf785\") " pod="openshift-ingress-canary/ingress-canary-jljrn" Apr 23 17:55:52.922195 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.922087 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/3cebb2bf-0419-4c26-b3b0-732d1737d1b3-data-volume\") pod \"insights-runtime-extractor-vn5xz\" (UID: \"3cebb2bf-0419-4c26-b3b0-732d1737d1b3\") " pod="openshift-insights/insights-runtime-extractor-vn5xz" Apr 23 17:55:52.922195 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.922144 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: 
\"kubernetes.io/host-path/3cebb2bf-0419-4c26-b3b0-732d1737d1b3-crio-socket\") pod \"insights-runtime-extractor-vn5xz\" (UID: \"3cebb2bf-0419-4c26-b3b0-732d1737d1b3\") " pod="openshift-insights/insights-runtime-extractor-vn5xz" Apr 23 17:55:52.922195 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.922187 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ldh2b\" (UniqueName: \"kubernetes.io/projected/3cebb2bf-0419-4c26-b3b0-732d1737d1b3-kube-api-access-ldh2b\") pod \"insights-runtime-extractor-vn5xz\" (UID: \"3cebb2bf-0419-4c26-b3b0-732d1737d1b3\") " pod="openshift-insights/insights-runtime-extractor-vn5xz" Apr 23 17:55:52.922434 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.922214 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2tqqv\" (UniqueName: \"kubernetes.io/projected/9b33314e-4870-41bb-a49e-503d87fbf785-kube-api-access-2tqqv\") pod \"ingress-canary-jljrn\" (UID: \"9b33314e-4870-41bb-a49e-503d87fbf785\") " pod="openshift-ingress-canary/ingress-canary-jljrn" Apr 23 17:55:52.922503 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.922479 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/3cebb2bf-0419-4c26-b3b0-732d1737d1b3-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-vn5xz\" (UID: \"3cebb2bf-0419-4c26-b3b0-732d1737d1b3\") " pod="openshift-insights/insights-runtime-extractor-vn5xz" Apr 23 17:55:52.922688 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.922655 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d451234e-ffc1-49bd-b43f-8b0057291cc5-tmp-dir\") pod \"dns-default-vlmvx\" (UID: \"d451234e-ffc1-49bd-b43f-8b0057291cc5\") " pod="openshift-dns/dns-default-vlmvx" Apr 23 17:55:52.922822 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.922804 2572 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d451234e-ffc1-49bd-b43f-8b0057291cc5-config-volume\") pod \"dns-default-vlmvx\" (UID: \"d451234e-ffc1-49bd-b43f-8b0057291cc5\") " pod="openshift-dns/dns-default-vlmvx" Apr 23 17:55:52.924361 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.924335 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/3cebb2bf-0419-4c26-b3b0-732d1737d1b3-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-vn5xz\" (UID: \"3cebb2bf-0419-4c26-b3b0-732d1737d1b3\") " pod="openshift-insights/insights-runtime-extractor-vn5xz" Apr 23 17:55:52.924461 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.924394 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d451234e-ffc1-49bd-b43f-8b0057291cc5-metrics-tls\") pod \"dns-default-vlmvx\" (UID: \"d451234e-ffc1-49bd-b43f-8b0057291cc5\") " pod="openshift-dns/dns-default-vlmvx" Apr 23 17:55:52.924591 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.924570 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b33314e-4870-41bb-a49e-503d87fbf785-cert\") pod \"ingress-canary-jljrn\" (UID: \"9b33314e-4870-41bb-a49e-503d87fbf785\") " pod="openshift-ingress-canary/ingress-canary-jljrn" Apr 23 17:55:52.930046 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.930025 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tqqv\" (UniqueName: \"kubernetes.io/projected/9b33314e-4870-41bb-a49e-503d87fbf785-kube-api-access-2tqqv\") pod \"ingress-canary-jljrn\" (UID: \"9b33314e-4870-41bb-a49e-503d87fbf785\") " pod="openshift-ingress-canary/ingress-canary-jljrn" Apr 23 17:55:52.930130 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.930102 2572 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hx58p\" (UniqueName: \"kubernetes.io/projected/d451234e-ffc1-49bd-b43f-8b0057291cc5-kube-api-access-hx58p\") pod \"dns-default-vlmvx\" (UID: \"d451234e-ffc1-49bd-b43f-8b0057291cc5\") " pod="openshift-dns/dns-default-vlmvx" Apr 23 17:55:52.931296 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:52.931280 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldh2b\" (UniqueName: \"kubernetes.io/projected/3cebb2bf-0419-4c26-b3b0-732d1737d1b3-kube-api-access-ldh2b\") pod \"insights-runtime-extractor-vn5xz\" (UID: \"3cebb2bf-0419-4c26-b3b0-732d1737d1b3\") " pod="openshift-insights/insights-runtime-extractor-vn5xz" Apr 23 17:55:53.086941 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:53.086849 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jljrn" Apr 23 17:55:53.097589 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:53.097566 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-vn5xz" Apr 23 17:55:53.118142 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:53.118119 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-vlmvx" Apr 23 17:55:53.253923 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:53.253891 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jljrn"] Apr 23 17:55:53.258045 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:55:53.258019 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b33314e_4870_41bb_a49e_503d87fbf785.slice/crio-30cf0c717afd6995f1cf064e0c2f0258026897cac8f475bed112b084187e8d48 WatchSource:0}: Error finding container 30cf0c717afd6995f1cf064e0c2f0258026897cac8f475bed112b084187e8d48: Status 404 returned error can't find the container with id 30cf0c717afd6995f1cf064e0c2f0258026897cac8f475bed112b084187e8d48 Apr 23 17:55:53.264828 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:53.264802 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-vn5xz"] Apr 23 17:55:53.268172 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:55:53.268146 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cebb2bf_0419_4c26_b3b0_732d1737d1b3.slice/crio-2f7e8802fb3eca2b0183d7725eebc7fb1e35e5a8a0c5b7bd1b3718c474053193 WatchSource:0}: Error finding container 2f7e8802fb3eca2b0183d7725eebc7fb1e35e5a8a0c5b7bd1b3718c474053193: Status 404 returned error can't find the container with id 2f7e8802fb3eca2b0183d7725eebc7fb1e35e5a8a0c5b7bd1b3718c474053193 Apr 23 17:55:53.288894 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:53.288870 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vlmvx"] Apr 23 17:55:53.300391 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:55:53.300369 2572 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd451234e_ffc1_49bd_b43f_8b0057291cc5.slice/crio-f5e912c78ccb2b304449c75f7dc8f45a95526c7c8d8ae93b94d0d96db8959e0c WatchSource:0}: Error finding container f5e912c78ccb2b304449c75f7dc8f45a95526c7c8d8ae93b94d0d96db8959e0c: Status 404 returned error can't find the container with id f5e912c78ccb2b304449c75f7dc8f45a95526c7c8d8ae93b94d0d96db8959e0c Apr 23 17:55:53.905353 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:53.905319 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vlmvx" event={"ID":"d451234e-ffc1-49bd-b43f-8b0057291cc5","Type":"ContainerStarted","Data":"f5e912c78ccb2b304449c75f7dc8f45a95526c7c8d8ae93b94d0d96db8959e0c"} Apr 23 17:55:53.907866 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:53.907835 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-vn5xz" event={"ID":"3cebb2bf-0419-4c26-b3b0-732d1737d1b3","Type":"ContainerStarted","Data":"bca94b9eeb389056c251b7b05c3f88d5774a81b75938f66240da3c8479aa4799"} Apr 23 17:55:53.907970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:53.907876 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-vn5xz" event={"ID":"3cebb2bf-0419-4c26-b3b0-732d1737d1b3","Type":"ContainerStarted","Data":"7057908078ad3ff06ca50224a4e7ac6c428d4049875d6f32419cc8d462df65f4"} Apr 23 17:55:53.907970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:53.907890 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-vn5xz" event={"ID":"3cebb2bf-0419-4c26-b3b0-732d1737d1b3","Type":"ContainerStarted","Data":"2f7e8802fb3eca2b0183d7725eebc7fb1e35e5a8a0c5b7bd1b3718c474053193"} Apr 23 17:55:53.909128 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:53.909103 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jljrn" 
event={"ID":"9b33314e-4870-41bb-a49e-503d87fbf785","Type":"ContainerStarted","Data":"30cf0c717afd6995f1cf064e0c2f0258026897cac8f475bed112b084187e8d48"} Apr 23 17:55:55.919603 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:55.919571 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vlmvx" event={"ID":"d451234e-ffc1-49bd-b43f-8b0057291cc5","Type":"ContainerStarted","Data":"1e3493bb1692877ab14721e343896f6151c7843f7c428d8d869e1ea81cde80d2"} Apr 23 17:55:55.919985 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:55.919612 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vlmvx" event={"ID":"d451234e-ffc1-49bd-b43f-8b0057291cc5","Type":"ContainerStarted","Data":"0fd99f2800d2fc16257b8a9a8f5fef2dcddc0ad3acd6e2fb0ca3119370fe9770"} Apr 23 17:55:55.919985 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:55.919776 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-vlmvx" Apr 23 17:55:55.921206 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:55.921178 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-vn5xz" event={"ID":"3cebb2bf-0419-4c26-b3b0-732d1737d1b3","Type":"ContainerStarted","Data":"596ac699fe47894cf6757e6a8932e387f518ff3412814f8b35d541a49c4c26a3"} Apr 23 17:55:55.922318 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:55.922293 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jljrn" event={"ID":"9b33314e-4870-41bb-a49e-503d87fbf785","Type":"ContainerStarted","Data":"26f44e759fc0097ba057ac0e9cd0e7f9b024b0b1f8756ddc20544bb880c5cbdd"} Apr 23 17:55:55.948340 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:55.948295 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-vlmvx" podStartSLOduration=1.635301139 podStartE2EDuration="3.948281148s" podCreationTimestamp="2026-04-23 17:55:52 +0000 
UTC" firstStartedPulling="2026-04-23 17:55:53.302271268 +0000 UTC m=+228.497175976" lastFinishedPulling="2026-04-23 17:55:55.615251273 +0000 UTC m=+230.810155985" observedRunningTime="2026-04-23 17:55:55.947626199 +0000 UTC m=+231.142530925" watchObservedRunningTime="2026-04-23 17:55:55.948281148 +0000 UTC m=+231.143185874" Apr 23 17:55:55.978242 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:55.978189 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-jljrn" podStartSLOduration=1.620924799 podStartE2EDuration="3.978177551s" podCreationTimestamp="2026-04-23 17:55:52 +0000 UTC" firstStartedPulling="2026-04-23 17:55:53.259821662 +0000 UTC m=+228.454726368" lastFinishedPulling="2026-04-23 17:55:55.617074411 +0000 UTC m=+230.811979120" observedRunningTime="2026-04-23 17:55:55.977987502 +0000 UTC m=+231.172892229" watchObservedRunningTime="2026-04-23 17:55:55.978177551 +0000 UTC m=+231.173082278" Apr 23 17:55:56.015864 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.015809 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-vn5xz" podStartSLOduration=1.725566917 podStartE2EDuration="4.015794667s" podCreationTimestamp="2026-04-23 17:55:52 +0000 UTC" firstStartedPulling="2026-04-23 17:55:53.328352011 +0000 UTC m=+228.523256716" lastFinishedPulling="2026-04-23 17:55:55.618579757 +0000 UTC m=+230.813484466" observedRunningTime="2026-04-23 17:55:56.015664833 +0000 UTC m=+231.210569560" watchObservedRunningTime="2026-04-23 17:55:56.015794667 +0000 UTC m=+231.210699432" Apr 23 17:55:56.241904 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.241868 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-666f9dcf86-8jflb"] Apr 23 17:55:56.244878 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.244857 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.264799 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.264779 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Apr 23 17:55:56.264799 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.264791 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Apr 23 17:55:56.264970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.264784 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Apr 23 17:55:56.265119 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.265107 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Apr 23 17:55:56.268573 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.268554 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-666f9dcf86-8jflb"] Apr 23 17:55:56.269585 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.269571 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-5lt52\"" Apr 23 17:55:56.286672 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.286649 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Apr 23 17:55:56.286756 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.286715 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Apr 23 17:55:56.286790 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.286765 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Apr 23 17:55:56.314712 ip-10-0-133-178 kubenswrapper[2572]: I0423 
17:55:56.314691 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Apr 23 17:55:56.347150 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.347127 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-serving-cert\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.347261 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.347179 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-oauth-serving-cert\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.347261 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.347232 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-trusted-ca-bundle\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.347342 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.347263 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-config\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.347342 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.347300 2572 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-service-ca\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.347426 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.347342 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-oauth-config\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.347426 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.347372 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcvxn\" (UniqueName: \"kubernetes.io/projected/fee966db-171e-4a49-aae4-3ef43f71e4e6-kube-api-access-lcvxn\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.448657 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.448626 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lcvxn\" (UniqueName: \"kubernetes.io/projected/fee966db-171e-4a49-aae4-3ef43f71e4e6-kube-api-access-lcvxn\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.448757 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.448668 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-serving-cert\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " 
pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.448757 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.448720 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-oauth-serving-cert\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.448836 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.448762 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-trusted-ca-bundle\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.449994 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.448893 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-config\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.449994 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.449331 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-service-ca\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.449994 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.449389 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-oauth-config\") pod 
\"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.449994 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.449823 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-oauth-serving-cert\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.450258 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.450143 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-trusted-ca-bundle\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.453374 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.450633 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-config\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.453374 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.450906 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-service-ca\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.453374 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.452087 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-serving-cert\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.454160 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.454140 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-oauth-config\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.457602 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.457585 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcvxn\" (UniqueName: \"kubernetes.io/projected/fee966db-171e-4a49-aae4-3ef43f71e4e6-kube-api-access-lcvxn\") pod \"console-666f9dcf86-8jflb\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") " pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.553379 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.553292 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:55:56.666768 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.666738 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-666f9dcf86-8jflb"] Apr 23 17:55:56.670322 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:55:56.670296 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfee966db_171e_4a49_aae4_3ef43f71e4e6.slice/crio-e014b952d3e658090970cf0b019a4c41648eb5f7e2a96f48c6f139e0bef89bf4 WatchSource:0}: Error finding container e014b952d3e658090970cf0b019a4c41648eb5f7e2a96f48c6f139e0bef89bf4: Status 404 returned error can't find the container with id e014b952d3e658090970cf0b019a4c41648eb5f7e2a96f48c6f139e0bef89bf4 Apr 23 17:55:56.925816 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:56.925727 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-666f9dcf86-8jflb" event={"ID":"fee966db-171e-4a49-aae4-3ef43f71e4e6","Type":"ContainerStarted","Data":"e014b952d3e658090970cf0b019a4c41648eb5f7e2a96f48c6f139e0bef89bf4"} Apr 23 17:55:59.935123 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:59.935086 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-666f9dcf86-8jflb" event={"ID":"fee966db-171e-4a49-aae4-3ef43f71e4e6","Type":"ContainerStarted","Data":"852cbc53594ecd07b45d7f531aece706c78e524aa906aa21074c94588c8b6c89"} Apr 23 17:55:59.958424 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:55:59.958354 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-666f9dcf86-8jflb" podStartSLOduration=1.558492455 podStartE2EDuration="3.958339836s" podCreationTimestamp="2026-04-23 17:55:56 +0000 UTC" firstStartedPulling="2026-04-23 17:55:56.672648233 +0000 UTC m=+231.867552937" lastFinishedPulling="2026-04-23 17:55:59.072495594 +0000 UTC m=+234.267400318" 
observedRunningTime="2026-04-23 17:55:59.957766143 +0000 UTC m=+235.152670880" watchObservedRunningTime="2026-04-23 17:55:59.958339836 +0000 UTC m=+235.153244564" Apr 23 17:56:05.927770 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:05.927742 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-vlmvx" Apr 23 17:56:06.553864 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:06.553824 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:56:06.554082 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:06.553912 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:56:06.559350 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:06.559326 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:56:06.802355 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:06.802317 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v9pcc" Apr 23 17:56:06.956104 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:06.956024 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-666f9dcf86-8jflb" Apr 23 17:56:45.820726 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:45.820697 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb"] Apr 23 17:56:45.823464 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:45.823449 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb" Apr 23 17:56:45.826024 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:45.826001 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-hub-kubeconfig\"" Apr 23 17:56:45.826704 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:45.826681 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"open-cluster-management-image-pull-credentials\"" Apr 23 17:56:45.826808 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:45.826701 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"openshift-service-ca.crt\"" Apr 23 17:56:45.826808 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:45.826733 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"kube-root-ca.crt\"" Apr 23 17:56:45.826808 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:45.826686 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-dockercfg-4b7mq\"" Apr 23 17:56:45.834820 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:45.834795 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb"] Apr 23 17:56:45.844546 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:45.844525 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5"] Apr 23 17:56:45.847197 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:45.847182 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5" Apr 23 17:56:45.850209 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:45.850189 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"work-manager-hub-kubeconfig\"" Apr 23 17:56:45.861416 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:45.861380 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5"] Apr 23 17:56:45.927768 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:45.927737 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/8b6eab33-869b-4c5a-ac2f-02ed423628e1-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb\" (UID: \"8b6eab33-869b-4c5a-ac2f-02ed423628e1\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb" Apr 23 17:56:45.927900 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:45.927778 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km97j\" (UniqueName: \"kubernetes.io/projected/8b6eab33-869b-4c5a-ac2f-02ed423628e1-kube-api-access-km97j\") pod \"managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb\" (UID: \"8b6eab33-869b-4c5a-ac2f-02ed423628e1\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb" Apr 23 17:56:46.014917 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.014885 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr"] Apr 23 17:56:46.017871 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.017857 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.019848 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.019829 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-ca\"" Apr 23 17:56:46.019984 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.019935 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-hub-kubeconfig\"" Apr 23 17:56:46.021219 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.021190 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-open-cluster-management.io-proxy-agent-signer-client-cert\"" Apr 23 17:56:46.021314 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.021289 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-service-proxy-server-certificates\"" Apr 23 17:56:46.028777 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.028757 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9-tmp\") pod \"klusterlet-addon-workmgr-67b95c6dc5-b2wm5\" (UID: \"9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5" Apr 23 17:56:46.028891 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.028787 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd95k\" (UniqueName: \"kubernetes.io/projected/9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9-kube-api-access-xd95k\") pod \"klusterlet-addon-workmgr-67b95c6dc5-b2wm5\" (UID: \"9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9\") " 
pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5" Apr 23 17:56:46.028891 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.028815 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9-klusterlet-config\") pod \"klusterlet-addon-workmgr-67b95c6dc5-b2wm5\" (UID: \"9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5" Apr 23 17:56:46.029004 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.028907 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/8b6eab33-869b-4c5a-ac2f-02ed423628e1-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb\" (UID: \"8b6eab33-869b-4c5a-ac2f-02ed423628e1\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb" Apr 23 17:56:46.029004 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.028960 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-km97j\" (UniqueName: \"kubernetes.io/projected/8b6eab33-869b-4c5a-ac2f-02ed423628e1-kube-api-access-km97j\") pod \"managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb\" (UID: \"8b6eab33-869b-4c5a-ac2f-02ed423628e1\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb" Apr 23 17:56:46.031182 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.031166 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/8b6eab33-869b-4c5a-ac2f-02ed423628e1-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb\" (UID: \"8b6eab33-869b-4c5a-ac2f-02ed423628e1\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb" 
Apr 23 17:56:46.038779 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.038759 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr"] Apr 23 17:56:46.042177 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.042150 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-km97j\" (UniqueName: \"kubernetes.io/projected/8b6eab33-869b-4c5a-ac2f-02ed423628e1-kube-api-access-km97j\") pod \"managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb\" (UID: \"8b6eab33-869b-4c5a-ac2f-02ed423628e1\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb" Apr 23 17:56:46.129832 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.129755 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/48466f95-7bb8-4f90-a9c1-b4857d193715-hub\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: \"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.129832 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.129805 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/48466f95-7bb8-4f90-a9c1-b4857d193715-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: \"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.129832 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.129827 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/48466f95-7bb8-4f90-a9c1-b4857d193715-ca\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: 
\"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.130042 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.129844 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/48466f95-7bb8-4f90-a9c1-b4857d193715-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: \"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.130042 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.129873 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/48466f95-7bb8-4f90-a9c1-b4857d193715-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: \"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.130042 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.129959 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgcpp\" (UniqueName: \"kubernetes.io/projected/48466f95-7bb8-4f90-a9c1-b4857d193715-kube-api-access-tgcpp\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: \"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.130042 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.129995 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9-tmp\") pod \"klusterlet-addon-workmgr-67b95c6dc5-b2wm5\" (UID: \"9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9\") " 
pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5" Apr 23 17:56:46.130042 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.130013 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xd95k\" (UniqueName: \"kubernetes.io/projected/9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9-kube-api-access-xd95k\") pod \"klusterlet-addon-workmgr-67b95c6dc5-b2wm5\" (UID: \"9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5" Apr 23 17:56:46.130238 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.130071 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9-klusterlet-config\") pod \"klusterlet-addon-workmgr-67b95c6dc5-b2wm5\" (UID: \"9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5" Apr 23 17:56:46.130386 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.130365 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9-tmp\") pod \"klusterlet-addon-workmgr-67b95c6dc5-b2wm5\" (UID: \"9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5" Apr 23 17:56:46.132452 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.132436 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9-klusterlet-config\") pod \"klusterlet-addon-workmgr-67b95c6dc5-b2wm5\" (UID: \"9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5" Apr 23 17:56:46.138375 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.138358 2572 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb" Apr 23 17:56:46.141056 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.141032 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd95k\" (UniqueName: \"kubernetes.io/projected/9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9-kube-api-access-xd95k\") pod \"klusterlet-addon-workmgr-67b95c6dc5-b2wm5\" (UID: \"9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5" Apr 23 17:56:46.155920 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.155884 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5" Apr 23 17:56:46.230911 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.230884 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/48466f95-7bb8-4f90-a9c1-b4857d193715-hub\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: \"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.231499 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.231473 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/48466f95-7bb8-4f90-a9c1-b4857d193715-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: \"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.231603 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.231524 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca\" (UniqueName: 
\"kubernetes.io/secret/48466f95-7bb8-4f90-a9c1-b4857d193715-ca\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: \"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.231603 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.231554 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/48466f95-7bb8-4f90-a9c1-b4857d193715-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: \"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.231603 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.231587 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/48466f95-7bb8-4f90-a9c1-b4857d193715-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: \"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.231759 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.231661 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tgcpp\" (UniqueName: \"kubernetes.io/projected/48466f95-7bb8-4f90-a9c1-b4857d193715-kube-api-access-tgcpp\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: \"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.233121 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.233068 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/48466f95-7bb8-4f90-a9c1-b4857d193715-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: 
\"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.234811 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.234608 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/48466f95-7bb8-4f90-a9c1-b4857d193715-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: \"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.236030 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.236008 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub\" (UniqueName: \"kubernetes.io/secret/48466f95-7bb8-4f90-a9c1-b4857d193715-hub\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: \"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.236613 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.236555 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/48466f95-7bb8-4f90-a9c1-b4857d193715-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: \"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.236613 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.236569 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca\" (UniqueName: \"kubernetes.io/secret/48466f95-7bb8-4f90-a9c1-b4857d193715-ca\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: \"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.240318 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.240276 2572 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgcpp\" (UniqueName: \"kubernetes.io/projected/48466f95-7bb8-4f90-a9c1-b4857d193715-kube-api-access-tgcpp\") pod \"cluster-proxy-proxy-agent-554c79c7c7-7zwbr\" (UID: \"48466f95-7bb8-4f90-a9c1-b4857d193715\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.263881 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.263853 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb"] Apr 23 17:56:46.267086 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:56:46.267057 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b6eab33_869b_4c5a_ac2f_02ed423628e1.slice/crio-ce39c904d7507a1680049d3f8dcdd735e0c8aeb227ed8fce4c53484f1df727ca WatchSource:0}: Error finding container ce39c904d7507a1680049d3f8dcdd735e0c8aeb227ed8fce4c53484f1df727ca: Status 404 returned error can't find the container with id ce39c904d7507a1680049d3f8dcdd735e0c8aeb227ed8fce4c53484f1df727ca Apr 23 17:56:46.278157 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.278133 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5"] Apr 23 17:56:46.281366 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:56:46.281347 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a4fe3ae_4a7d_44f1_9109_de77cc26d1e9.slice/crio-f78d968fd90e0a074bdb0dabc14db1776c064fbeffdcf98ae2c7647db82f59a0 WatchSource:0}: Error finding container f78d968fd90e0a074bdb0dabc14db1776c064fbeffdcf98ae2c7647db82f59a0: Status 404 returned error can't find the container with id f78d968fd90e0a074bdb0dabc14db1776c064fbeffdcf98ae2c7647db82f59a0 Apr 23 17:56:46.326594 ip-10-0-133-178 
kubenswrapper[2572]: I0423 17:56:46.326563 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" Apr 23 17:56:46.449319 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:46.449290 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr"] Apr 23 17:56:46.452432 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:56:46.452390 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48466f95_7bb8_4f90_a9c1_b4857d193715.slice/crio-90a751682df7c6ba5b7e084f86c85ced8a08fd128a043ac736a24a70484ec31a WatchSource:0}: Error finding container 90a751682df7c6ba5b7e084f86c85ced8a08fd128a043ac736a24a70484ec31a: Status 404 returned error can't find the container with id 90a751682df7c6ba5b7e084f86c85ced8a08fd128a043ac736a24a70484ec31a Apr 23 17:56:47.053721 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:47.053684 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" event={"ID":"48466f95-7bb8-4f90-a9c1-b4857d193715","Type":"ContainerStarted","Data":"90a751682df7c6ba5b7e084f86c85ced8a08fd128a043ac736a24a70484ec31a"} Apr 23 17:56:47.055108 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:47.055072 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb" event={"ID":"8b6eab33-869b-4c5a-ac2f-02ed423628e1","Type":"ContainerStarted","Data":"ce39c904d7507a1680049d3f8dcdd735e0c8aeb227ed8fce4c53484f1df727ca"} Apr 23 17:56:47.056311 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:47.056278 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5" 
event={"ID":"9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9","Type":"ContainerStarted","Data":"f78d968fd90e0a074bdb0dabc14db1776c064fbeffdcf98ae2c7647db82f59a0"}
Apr 23 17:56:50.367239 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:50.367191 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:56:50.367725 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:50.367298 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs\") pod \"network-metrics-daemon-lm6wc\" (UID: \"f7526a98-a284-45c2-aeb2-cce4ddcd8f45\") " pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:56:50.369753 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:50.369726 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\""
Apr 23 17:56:50.370131 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:50.370105 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Apr 23 17:56:50.380278 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:50.380255 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/91e65909-6fc5-43ad-9403-4e762e15651f-original-pull-secret\") pod \"global-pull-secret-syncer-n95c8\" (UID: \"91e65909-6fc5-43ad-9403-4e762e15651f\") " pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:56:50.380278 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:50.380273 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7526a98-a284-45c2-aeb2-cce4ddcd8f45-metrics-certs\") pod \"network-metrics-daemon-lm6wc\" (UID: \"f7526a98-a284-45c2-aeb2-cce4ddcd8f45\") " pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:56:50.468382 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:50.468341 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vl8gx\" (UniqueName: \"kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx\") pod \"network-check-target-9wq98\" (UID: \"4b5c0501-ab5e-4cac-9c9f-f306624ec47f\") " pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:56:50.470790 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:50.470766 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Apr 23 17:56:50.481094 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:50.481066 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Apr 23 17:56:50.491815 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:50.491791 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl8gx\" (UniqueName: \"kubernetes.io/projected/4b5c0501-ab5e-4cac-9c9f-f306624ec47f-kube-api-access-vl8gx\") pod \"network-check-target-9wq98\" (UID: \"4b5c0501-ab5e-4cac-9c9f-f306624ec47f\") " pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:56:50.625535 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:50.625460 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-r6xp2\""
Apr 23 17:56:50.629150 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:50.629128 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-n95c8"
Apr 23 17:56:50.633919 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:50.633895 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:56:50.634914 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:50.634894 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-q5nnb\""
Apr 23 17:56:50.643539 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:50.643518 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lm6wc"
Apr 23 17:56:51.326043 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:51.326000 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-9wq98"]
Apr 23 17:56:51.330473 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:56:51.330425 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b5c0501_ab5e_4cac_9c9f_f306624ec47f.slice/crio-6d1878b251627c8fd11a51a1f13b03e813ef91ca5e2abeb0d6eff20b7bba44c1 WatchSource:0}: Error finding container 6d1878b251627c8fd11a51a1f13b03e813ef91ca5e2abeb0d6eff20b7bba44c1: Status 404 returned error can't find the container with id 6d1878b251627c8fd11a51a1f13b03e813ef91ca5e2abeb0d6eff20b7bba44c1
Apr 23 17:56:51.544973 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:51.544942 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-n95c8"]
Apr 23 17:56:51.548910 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:56:51.548872 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91e65909_6fc5_43ad_9403_4e762e15651f.slice/crio-2e80f6d8504bfaf25472806c2b6cef63596d3ab185b7600e8f2669ceae88b10a WatchSource:0}: Error finding container 2e80f6d8504bfaf25472806c2b6cef63596d3ab185b7600e8f2669ceae88b10a: Status 404 returned error can't find the container with id 2e80f6d8504bfaf25472806c2b6cef63596d3ab185b7600e8f2669ceae88b10a
Apr 23 17:56:51.549048 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:51.548928 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lm6wc"]
Apr 23 17:56:51.554000 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:56:51.553972 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7526a98_a284_45c2_aeb2_cce4ddcd8f45.slice/crio-c29683cb88ff5f36a97b4ba59e34525dad964357283eb27a456fa8c86ea3efc1 WatchSource:0}: Error finding container c29683cb88ff5f36a97b4ba59e34525dad964357283eb27a456fa8c86ea3efc1: Status 404 returned error can't find the container with id c29683cb88ff5f36a97b4ba59e34525dad964357283eb27a456fa8c86ea3efc1
Apr 23 17:56:52.075273 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:52.075233 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-9wq98" event={"ID":"4b5c0501-ab5e-4cac-9c9f-f306624ec47f","Type":"ContainerStarted","Data":"6d1878b251627c8fd11a51a1f13b03e813ef91ca5e2abeb0d6eff20b7bba44c1"}
Apr 23 17:56:52.077256 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:52.077221 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" event={"ID":"48466f95-7bb8-4f90-a9c1-b4857d193715","Type":"ContainerStarted","Data":"eed980f68e23c48f08c25f7fa73ceea34d0e04feb6c30eca06e200130eedd049"}
Apr 23 17:56:52.078851 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:52.078823 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb" event={"ID":"8b6eab33-869b-4c5a-ac2f-02ed423628e1","Type":"ContainerStarted","Data":"48e15454f04d36b2dac221ee354abf44c7cab0e05511162b52863e972120e153"}
Apr 23 17:56:52.081643 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:52.081608 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lm6wc" event={"ID":"f7526a98-a284-45c2-aeb2-cce4ddcd8f45","Type":"ContainerStarted","Data":"c29683cb88ff5f36a97b4ba59e34525dad964357283eb27a456fa8c86ea3efc1"}
Apr 23 17:56:52.084077 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:52.084016 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-n95c8" event={"ID":"91e65909-6fc5-43ad-9403-4e762e15651f","Type":"ContainerStarted","Data":"2e80f6d8504bfaf25472806c2b6cef63596d3ab185b7600e8f2669ceae88b10a"}
Apr 23 17:56:52.088199 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:52.088175 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5" event={"ID":"9a4fe3ae-4a7d-44f1-9109-de77cc26d1e9","Type":"ContainerStarted","Data":"e7add5c24d871b33f23c8f426113c8a9960283fe97ed5d947ab875fb3bc4a7ee"}
Apr 23 17:56:52.088644 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:52.088618 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5"
Apr 23 17:56:52.090776 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:52.090744 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5"
Apr 23 17:56:52.098108 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:52.097619 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7ff7b5c5f-hsbhb" podStartSLOduration=2.204834593 podStartE2EDuration="7.097608876s" podCreationTimestamp="2026-04-23 17:56:45 +0000 UTC" firstStartedPulling="2026-04-23 17:56:46.26893656 +0000 UTC m=+281.463841265" lastFinishedPulling="2026-04-23 17:56:51.161710839 +0000 UTC m=+286.356615548" observedRunningTime="2026-04-23 17:56:52.096215808 +0000 UTC m=+287.291120532" watchObservedRunningTime="2026-04-23 17:56:52.097608876 +0000 UTC m=+287.292513604"
Apr 23 17:56:52.114759 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:52.113713 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-67b95c6dc5-b2wm5" podStartSLOduration=2.21792676 podStartE2EDuration="7.113702246s" podCreationTimestamp="2026-04-23 17:56:45 +0000 UTC" firstStartedPulling="2026-04-23 17:56:46.28309409 +0000 UTC m=+281.477998798" lastFinishedPulling="2026-04-23 17:56:51.178869579 +0000 UTC m=+286.373774284" observedRunningTime="2026-04-23 17:56:52.113652058 +0000 UTC m=+287.308556787" watchObservedRunningTime="2026-04-23 17:56:52.113702246 +0000 UTC m=+287.308606973"
Apr 23 17:56:53.094020 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:53.093915 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lm6wc" event={"ID":"f7526a98-a284-45c2-aeb2-cce4ddcd8f45","Type":"ContainerStarted","Data":"67a115cade0cf9d845cd5bc3511321b75f8bc6224cbfae4bb9c95edfccefff44"}
Apr 23 17:56:53.094020 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:53.093962 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lm6wc" event={"ID":"f7526a98-a284-45c2-aeb2-cce4ddcd8f45","Type":"ContainerStarted","Data":"2133da304e4f95b886d3ae7c9eaf0d9ad60864c3c7bbb8a91fa86a9e42e6137a"}
Apr 23 17:56:55.100744 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:55.100713 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-9wq98" event={"ID":"4b5c0501-ab5e-4cac-9c9f-f306624ec47f","Type":"ContainerStarted","Data":"5a663d6185fd0e9159255e8c9757f7061e34de31c6d88e387a56601fdaf09f35"}
Apr 23 17:56:55.101161 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:55.100808 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:56:55.116890 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:55.116610 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-lm6wc" podStartSLOduration=164.019687625 podStartE2EDuration="2m45.116591985s" podCreationTimestamp="2026-04-23 17:54:10 +0000 UTC" firstStartedPulling="2026-04-23 17:56:51.556072958 +0000 UTC m=+286.750977670" lastFinishedPulling="2026-04-23 17:56:52.652977312 +0000 UTC m=+287.847882030" observedRunningTime="2026-04-23 17:56:53.123791456 +0000 UTC m=+288.318696183" watchObservedRunningTime="2026-04-23 17:56:55.116591985 +0000 UTC m=+290.311496713"
Apr 23 17:56:55.117514 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:55.117469 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-9wq98" podStartSLOduration=162.156118563 podStartE2EDuration="2m45.117459069s" podCreationTimestamp="2026-04-23 17:54:10 +0000 UTC" firstStartedPulling="2026-04-23 17:56:51.33394232 +0000 UTC m=+286.528847027" lastFinishedPulling="2026-04-23 17:56:54.295282813 +0000 UTC m=+289.490187533" observedRunningTime="2026-04-23 17:56:55.116034201 +0000 UTC m=+290.310938929" watchObservedRunningTime="2026-04-23 17:56:55.117459069 +0000 UTC m=+290.312363800"
Apr 23 17:56:57.107451 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:57.107395 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-n95c8" event={"ID":"91e65909-6fc5-43ad-9403-4e762e15651f","Type":"ContainerStarted","Data":"ab2ee0c45cc80eed2b49a59b6295b4ad5d6585c1fe27d46f4e5a6186da4be86d"}
Apr 23 17:56:57.109189 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:57.109163 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" event={"ID":"48466f95-7bb8-4f90-a9c1-b4857d193715","Type":"ContainerStarted","Data":"2b85cdeef31fc02bc6b5de7eb2baf9d4a74da7415efdf1d0fb21bbce3bf9554a"}
Apr 23 17:56:57.109189 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:57.109190 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" event={"ID":"48466f95-7bb8-4f90-a9c1-b4857d193715","Type":"ContainerStarted","Data":"b38805959baa03bd3ccee0177d0916ad360cb0cfb8bf86f151d2e74ac1d1629f"}
Apr 23 17:56:57.140685 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:57.140640 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-554c79c7c7-7zwbr" podStartSLOduration=2.049989148 podStartE2EDuration="12.140625751s" podCreationTimestamp="2026-04-23 17:56:45 +0000 UTC" firstStartedPulling="2026-04-23 17:56:46.454103985 +0000 UTC m=+281.649008693" lastFinishedPulling="2026-04-23 17:56:56.54474058 +0000 UTC m=+291.739645296" observedRunningTime="2026-04-23 17:56:57.140340926 +0000 UTC m=+292.335245664" watchObservedRunningTime="2026-04-23 17:56:57.140625751 +0000 UTC m=+292.335530456"
Apr 23 17:56:57.141840 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:56:57.141805 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-n95c8" podStartSLOduration=160.145016711 podStartE2EDuration="2m45.141793988s" podCreationTimestamp="2026-04-23 17:54:12 +0000 UTC" firstStartedPulling="2026-04-23 17:56:51.550989558 +0000 UTC m=+286.745894267" lastFinishedPulling="2026-04-23 17:56:56.54776684 +0000 UTC m=+291.742671544" observedRunningTime="2026-04-23 17:56:57.123649068 +0000 UTC m=+292.318553794" watchObservedRunningTime="2026-04-23 17:56:57.141793988 +0000 UTC m=+292.336698715"
Apr 23 17:57:05.323152 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:05.323121 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log"
Apr 23 17:57:05.323714 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:05.323306 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log"
Apr 23 17:57:05.327644 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:05.327623 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log"
Apr 23 17:57:05.327760 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:05.327740 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log"
Apr 23 17:57:05.329688 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:05.329674 2572 kubelet.go:1628] "Image garbage collection succeeded"
Apr 23 17:57:24.743277 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:24.743242 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-666f9dcf86-8jflb"]
Apr 23 17:57:26.106126 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:26.106095 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-9wq98"
Apr 23 17:57:49.761798 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:49.761740 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-console/console-666f9dcf86-8jflb" podUID="fee966db-171e-4a49-aae4-3ef43f71e4e6" containerName="console" containerID="cri-o://852cbc53594ecd07b45d7f531aece706c78e524aa906aa21074c94588c8b6c89" gracePeriod=15
Apr 23 17:57:49.993191 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:49.993172 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-666f9dcf86-8jflb_fee966db-171e-4a49-aae4-3ef43f71e4e6/console/0.log"
Apr 23 17:57:49.993350 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:49.993238 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-666f9dcf86-8jflb"
Apr 23 17:57:50.083635 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.083551 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-config\") pod \"fee966db-171e-4a49-aae4-3ef43f71e4e6\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") "
Apr 23 17:57:50.083635 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.083607 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-oauth-config\") pod \"fee966db-171e-4a49-aae4-3ef43f71e4e6\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") "
Apr 23 17:57:50.083861 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.083642 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-oauth-serving-cert\") pod \"fee966db-171e-4a49-aae4-3ef43f71e4e6\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") "
Apr 23 17:57:50.083861 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.083661 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-service-ca\") pod \"fee966db-171e-4a49-aae4-3ef43f71e4e6\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") "
Apr 23 17:57:50.083861 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.083679 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcvxn\" (UniqueName: \"kubernetes.io/projected/fee966db-171e-4a49-aae4-3ef43f71e4e6-kube-api-access-lcvxn\") pod \"fee966db-171e-4a49-aae4-3ef43f71e4e6\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") "
Apr 23 17:57:50.083861 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.083709 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-serving-cert\") pod \"fee966db-171e-4a49-aae4-3ef43f71e4e6\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") "
Apr 23 17:57:50.083861 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.083739 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-trusted-ca-bundle\") pod \"fee966db-171e-4a49-aae4-3ef43f71e4e6\" (UID: \"fee966db-171e-4a49-aae4-3ef43f71e4e6\") "
Apr 23 17:57:50.084130 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.083920 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-config" (OuterVolumeSpecName: "console-config") pod "fee966db-171e-4a49-aae4-3ef43f71e4e6" (UID: "fee966db-171e-4a49-aae4-3ef43f71e4e6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:57:50.084187 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.084135 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "fee966db-171e-4a49-aae4-3ef43f71e4e6" (UID: "fee966db-171e-4a49-aae4-3ef43f71e4e6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:57:50.084235 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.084210 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-service-ca" (OuterVolumeSpecName: "service-ca") pod "fee966db-171e-4a49-aae4-3ef43f71e4e6" (UID: "fee966db-171e-4a49-aae4-3ef43f71e4e6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:57:50.084344 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.084316 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "fee966db-171e-4a49-aae4-3ef43f71e4e6" (UID: "fee966db-171e-4a49-aae4-3ef43f71e4e6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 17:57:50.086204 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.086174 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "fee966db-171e-4a49-aae4-3ef43f71e4e6" (UID: "fee966db-171e-4a49-aae4-3ef43f71e4e6"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 17:57:50.086204 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.086184 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee966db-171e-4a49-aae4-3ef43f71e4e6-kube-api-access-lcvxn" (OuterVolumeSpecName: "kube-api-access-lcvxn") pod "fee966db-171e-4a49-aae4-3ef43f71e4e6" (UID: "fee966db-171e-4a49-aae4-3ef43f71e4e6"). InnerVolumeSpecName "kube-api-access-lcvxn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 17:57:50.086325 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.086231 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "fee966db-171e-4a49-aae4-3ef43f71e4e6" (UID: "fee966db-171e-4a49-aae4-3ef43f71e4e6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 17:57:50.184604 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.184567 2572 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-oauth-serving-cert\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\""
Apr 23 17:57:50.184604 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.184598 2572 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-service-ca\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\""
Apr 23 17:57:50.184604 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.184608 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lcvxn\" (UniqueName: \"kubernetes.io/projected/fee966db-171e-4a49-aae4-3ef43f71e4e6-kube-api-access-lcvxn\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\""
Apr 23 17:57:50.184820 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.184617 2572 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-serving-cert\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\""
Apr 23 17:57:50.184820 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.184626 2572 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-trusted-ca-bundle\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\""
Apr 23 17:57:50.184820 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.184634 2572 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-config\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\""
Apr 23 17:57:50.184820 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.184643 2572 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fee966db-171e-4a49-aae4-3ef43f71e4e6-console-oauth-config\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\""
Apr 23 17:57:50.257830 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.257801 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-666f9dcf86-8jflb_fee966db-171e-4a49-aae4-3ef43f71e4e6/console/0.log"
Apr 23 17:57:50.257954 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.257838 2572 generic.go:358] "Generic (PLEG): container finished" podID="fee966db-171e-4a49-aae4-3ef43f71e4e6" containerID="852cbc53594ecd07b45d7f531aece706c78e524aa906aa21074c94588c8b6c89" exitCode=2
Apr 23 17:57:50.257954 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.257870 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-666f9dcf86-8jflb" event={"ID":"fee966db-171e-4a49-aae4-3ef43f71e4e6","Type":"ContainerDied","Data":"852cbc53594ecd07b45d7f531aece706c78e524aa906aa21074c94588c8b6c89"}
Apr 23 17:57:50.257954 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.257901 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-666f9dcf86-8jflb"
Apr 23 17:57:50.257954 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.257908 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-666f9dcf86-8jflb" event={"ID":"fee966db-171e-4a49-aae4-3ef43f71e4e6","Type":"ContainerDied","Data":"e014b952d3e658090970cf0b019a4c41648eb5f7e2a96f48c6f139e0bef89bf4"}
Apr 23 17:57:50.257954 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.257923 2572 scope.go:117] "RemoveContainer" containerID="852cbc53594ecd07b45d7f531aece706c78e524aa906aa21074c94588c8b6c89"
Apr 23 17:57:50.265715 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.265655 2572 scope.go:117] "RemoveContainer" containerID="852cbc53594ecd07b45d7f531aece706c78e524aa906aa21074c94588c8b6c89"
Apr 23 17:57:50.265926 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:57:50.265902 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"852cbc53594ecd07b45d7f531aece706c78e524aa906aa21074c94588c8b6c89\": container with ID starting with 852cbc53594ecd07b45d7f531aece706c78e524aa906aa21074c94588c8b6c89 not found: ID does not exist" containerID="852cbc53594ecd07b45d7f531aece706c78e524aa906aa21074c94588c8b6c89"
Apr 23 17:57:50.265986 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.265934 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"852cbc53594ecd07b45d7f531aece706c78e524aa906aa21074c94588c8b6c89"} err="failed to get container status \"852cbc53594ecd07b45d7f531aece706c78e524aa906aa21074c94588c8b6c89\": rpc error: code = NotFound desc = could not find container \"852cbc53594ecd07b45d7f531aece706c78e524aa906aa21074c94588c8b6c89\": container with ID starting with 852cbc53594ecd07b45d7f531aece706c78e524aa906aa21074c94588c8b6c89 not found: ID does not exist"
Apr 23 17:57:50.278545 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.278519 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-console/console-666f9dcf86-8jflb"]
Apr 23 17:57:50.282020 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:50.282001 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-666f9dcf86-8jflb"]
Apr 23 17:57:51.417918 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:51.417884 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fee966db-171e-4a49-aae4-3ef43f71e4e6" path="/var/lib/kubelet/pods/fee966db-171e-4a49-aae4-3ef43f71e4e6/volumes"
Apr 23 17:57:55.286544 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.286508 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5"]
Apr 23 17:57:55.286936 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.286720 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fee966db-171e-4a49-aae4-3ef43f71e4e6" containerName="console"
Apr 23 17:57:55.286936 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.286733 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee966db-171e-4a49-aae4-3ef43f71e4e6" containerName="console"
Apr 23 17:57:55.286936 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.286794 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="fee966db-171e-4a49-aae4-3ef43f71e4e6" containerName="console"
Apr 23 17:57:55.291078 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.291058 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5"
Apr 23 17:57:55.293036 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.293013 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Apr 23 17:57:55.293123 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.293033 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Apr 23 17:57:55.293123 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.293040 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-k8fbk\""
Apr 23 17:57:55.297396 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.297375 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5"]
Apr 23 17:57:55.420145 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.420115 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5\" (UID: \"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5"
Apr 23 17:57:55.420310 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.420162 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq5bs\" (UniqueName: \"kubernetes.io/projected/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-kube-api-access-qq5bs\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5\" (UID: \"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5"
Apr 23 17:57:55.420310 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.420183 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5\" (UID: \"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5"
Apr 23 17:57:55.521022 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.520988 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qq5bs\" (UniqueName: \"kubernetes.io/projected/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-kube-api-access-qq5bs\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5\" (UID: \"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5"
Apr 23 17:57:55.521022 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.521025 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5\" (UID: \"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5"
Apr 23 17:57:55.521226 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.521060 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5\" (UID: \"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5"
Apr 23 17:57:55.521392 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.521373 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5\" (UID: \"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5"
Apr 23 17:57:55.521392 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.521388 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5\" (UID: \"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5"
Apr 23 17:57:55.529502 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.529479 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq5bs\" (UniqueName: \"kubernetes.io/projected/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-kube-api-access-qq5bs\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5\" (UID: \"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5"
Apr 23 17:57:55.600749 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.600683 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5"
Apr 23 17:57:55.923628 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.923550 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5"]
Apr 23 17:57:55.926416 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:57:55.926378 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf252e45e_4a0d_4870_83d8_8bdd0b96cbfb.slice/crio-28a8847df61227c22b29f4d20865900146db68489e4222e0d4f3faabc872c779 WatchSource:0}: Error finding container 28a8847df61227c22b29f4d20865900146db68489e4222e0d4f3faabc872c779: Status 404 returned error can't find the container with id 28a8847df61227c22b29f4d20865900146db68489e4222e0d4f3faabc872c779
Apr 23 17:57:55.928128 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:55.928112 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 17:57:56.274838 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:57:56.274792 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5" event={"ID":"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb","Type":"ContainerStarted","Data":"28a8847df61227c22b29f4d20865900146db68489e4222e0d4f3faabc872c779"}
Apr 23 17:58:01.290411 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:01.290361 2572 generic.go:358] "Generic (PLEG): container finished" podID="f252e45e-4a0d-4870-83d8-8bdd0b96cbfb" containerID="a4f23fd83038ac7ef5c5ea559428b1790abadb145cc7e37bae53aa83969de694" exitCode=0
Apr 23 17:58:01.290788 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:01.290450 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5" event={"ID":"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb","Type":"ContainerDied","Data":"a4f23fd83038ac7ef5c5ea559428b1790abadb145cc7e37bae53aa83969de694"}
Apr 23 17:58:03.296417 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:03.296313 2572 generic.go:358] "Generic (PLEG): container finished" podID="f252e45e-4a0d-4870-83d8-8bdd0b96cbfb" containerID="e6c431f72c1fda318a9f7aea731be1cb6913dff0593643db27353dd0d2821802" exitCode=0
Apr 23 17:58:03.296417 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:03.296384 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5" event={"ID":"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb","Type":"ContainerDied","Data":"e6c431f72c1fda318a9f7aea731be1cb6913dff0593643db27353dd0d2821802"}
Apr 23 17:58:10.320971 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:10.320938 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5" event={"ID":"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb","Type":"ContainerStarted","Data":"6088f7bf4b7666941ff4b11c36609563645855aa46bdc8a002b1da81063698a4"}
Apr 23 17:58:10.338302 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:10.338246 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5" podStartSLOduration=1.06377792 podStartE2EDuration="15.338230062s" podCreationTimestamp="2026-04-23 17:57:55 +0000 UTC" firstStartedPulling="2026-04-23 17:57:55.928253789 +0000 UTC m=+351.123158495" lastFinishedPulling="2026-04-23 17:58:10.202705929 +0000 UTC m=+365.397610637" observedRunningTime="2026-04-23 17:58:10.337001263 +0000 UTC m=+365.531905990" watchObservedRunningTime="2026-04-23 17:58:10.338230062 +0000 UTC m=+365.533134790"
Apr 23 17:58:11.327719 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:11.327679 2572 generic.go:358] "Generic (PLEG):
container finished" podID="f252e45e-4a0d-4870-83d8-8bdd0b96cbfb" containerID="6088f7bf4b7666941ff4b11c36609563645855aa46bdc8a002b1da81063698a4" exitCode=0 Apr 23 17:58:11.328128 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:11.327765 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5" event={"ID":"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb","Type":"ContainerDied","Data":"6088f7bf4b7666941ff4b11c36609563645855aa46bdc8a002b1da81063698a4"} Apr 23 17:58:12.449712 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:12.449691 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5" Apr 23 17:58:12.557046 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:12.556996 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-util\") pod \"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb\" (UID: \"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb\") " Apr 23 17:58:12.557218 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:12.557061 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq5bs\" (UniqueName: \"kubernetes.io/projected/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-kube-api-access-qq5bs\") pod \"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb\" (UID: \"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb\") " Apr 23 17:58:12.557218 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:12.557088 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-bundle\") pod \"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb\" (UID: \"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb\") " Apr 23 17:58:12.557766 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:12.557735 2572 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-bundle" (OuterVolumeSpecName: "bundle") pod "f252e45e-4a0d-4870-83d8-8bdd0b96cbfb" (UID: "f252e45e-4a0d-4870-83d8-8bdd0b96cbfb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 23 17:58:12.559238 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:12.559218 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-kube-api-access-qq5bs" (OuterVolumeSpecName: "kube-api-access-qq5bs") pod "f252e45e-4a0d-4870-83d8-8bdd0b96cbfb" (UID: "f252e45e-4a0d-4870-83d8-8bdd0b96cbfb"). InnerVolumeSpecName "kube-api-access-qq5bs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 17:58:12.562969 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:12.562939 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-util" (OuterVolumeSpecName: "util") pod "f252e45e-4a0d-4870-83d8-8bdd0b96cbfb" (UID: "f252e45e-4a0d-4870-83d8-8bdd0b96cbfb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 23 17:58:12.658491 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:12.658393 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qq5bs\" (UniqueName: \"kubernetes.io/projected/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-kube-api-access-qq5bs\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 17:58:12.658491 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:12.658439 2572 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-bundle\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 17:58:12.658491 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:12.658448 2572 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f252e45e-4a0d-4870-83d8-8bdd0b96cbfb-util\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 17:58:13.334263 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:13.334231 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5" event={"ID":"f252e45e-4a0d-4870-83d8-8bdd0b96cbfb","Type":"ContainerDied","Data":"28a8847df61227c22b29f4d20865900146db68489e4222e0d4f3faabc872c779"} Apr 23 17:58:13.334263 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:13.334262 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28a8847df61227c22b29f4d20865900146db68489e4222e0d4f3faabc872c779" Apr 23 17:58:13.334263 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:13.334265 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cgwpp5" Apr 23 17:58:21.569429 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.569377 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-fzsft"] Apr 23 17:58:21.569931 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.569621 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f252e45e-4a0d-4870-83d8-8bdd0b96cbfb" containerName="pull" Apr 23 17:58:21.569931 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.569634 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="f252e45e-4a0d-4870-83d8-8bdd0b96cbfb" containerName="pull" Apr 23 17:58:21.569931 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.569646 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f252e45e-4a0d-4870-83d8-8bdd0b96cbfb" containerName="util" Apr 23 17:58:21.569931 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.569652 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="f252e45e-4a0d-4870-83d8-8bdd0b96cbfb" containerName="util" Apr 23 17:58:21.569931 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.569658 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f252e45e-4a0d-4870-83d8-8bdd0b96cbfb" containerName="extract" Apr 23 17:58:21.569931 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.569663 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="f252e45e-4a0d-4870-83d8-8bdd0b96cbfb" containerName="extract" Apr 23 17:58:21.569931 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.569702 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="f252e45e-4a0d-4870-83d8-8bdd0b96cbfb" containerName="extract" Apr 23 17:58:21.621658 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.621627 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-fzsft"] Apr 23 
17:58:21.621807 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.621739 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-fzsft" Apr 23 17:58:21.627289 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.627264 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"keda-ocp-cabundle\"" Apr 23 17:58:21.627289 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.627274 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-certs\"" Apr 23 17:58:21.627847 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.627647 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-ct5nf\"" Apr 23 17:58:21.628417 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.628387 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"kube-root-ca.crt\"" Apr 23 17:58:21.628690 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.628674 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"kedaorg-certs\"" Apr 23 17:58:21.628991 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.628967 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"openshift-service-ca.crt\"" Apr 23 17:58:21.717423 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.717371 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-certificates\") pod \"keda-operator-ffbb595cb-fzsft\" (UID: \"bf16f1e3-e6f3-494d-a81a-aca6464d372b\") " pod="openshift-keda/keda-operator-ffbb595cb-fzsft" Apr 23 17:58:21.717423 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.717423 2572 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/bf16f1e3-e6f3-494d-a81a-aca6464d372b-cabundle0\") pod \"keda-operator-ffbb595cb-fzsft\" (UID: \"bf16f1e3-e6f3-494d-a81a-aca6464d372b\") " pod="openshift-keda/keda-operator-ffbb595cb-fzsft" Apr 23 17:58:21.717620 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.717512 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb98j\" (UniqueName: \"kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-kube-api-access-bb98j\") pod \"keda-operator-ffbb595cb-fzsft\" (UID: \"bf16f1e3-e6f3-494d-a81a-aca6464d372b\") " pod="openshift-keda/keda-operator-ffbb595cb-fzsft" Apr 23 17:58:21.818131 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.818096 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bb98j\" (UniqueName: \"kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-kube-api-access-bb98j\") pod \"keda-operator-ffbb595cb-fzsft\" (UID: \"bf16f1e3-e6f3-494d-a81a-aca6464d372b\") " pod="openshift-keda/keda-operator-ffbb595cb-fzsft" Apr 23 17:58:21.818286 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.818143 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-certificates\") pod \"keda-operator-ffbb595cb-fzsft\" (UID: \"bf16f1e3-e6f3-494d-a81a-aca6464d372b\") " pod="openshift-keda/keda-operator-ffbb595cb-fzsft" Apr 23 17:58:21.818286 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.818239 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/bf16f1e3-e6f3-494d-a81a-aca6464d372b-cabundle0\") pod \"keda-operator-ffbb595cb-fzsft\" (UID: \"bf16f1e3-e6f3-494d-a81a-aca6464d372b\") " 
pod="openshift-keda/keda-operator-ffbb595cb-fzsft" Apr 23 17:58:21.818359 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:21.818318 2572 secret.go:281] references non-existent secret key: ca.crt Apr 23 17:58:21.818359 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:21.818339 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 23 17:58:21.818359 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:21.818349 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-fzsft: references non-existent secret key: ca.crt Apr 23 17:58:21.818489 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:21.818417 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-certificates podName:bf16f1e3-e6f3-494d-a81a-aca6464d372b nodeName:}" failed. No retries permitted until 2026-04-23 17:58:22.318383096 +0000 UTC m=+377.513287806 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-certificates") pod "keda-operator-ffbb595cb-fzsft" (UID: "bf16f1e3-e6f3-494d-a81a-aca6464d372b") : references non-existent secret key: ca.crt Apr 23 17:58:21.818801 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.818785 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/bf16f1e3-e6f3-494d-a81a-aca6464d372b-cabundle0\") pod \"keda-operator-ffbb595cb-fzsft\" (UID: \"bf16f1e3-e6f3-494d-a81a-aca6464d372b\") " pod="openshift-keda/keda-operator-ffbb595cb-fzsft" Apr 23 17:58:21.826599 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.826547 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb98j\" (UniqueName: \"kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-kube-api-access-bb98j\") pod \"keda-operator-ffbb595cb-fzsft\" (UID: \"bf16f1e3-e6f3-494d-a81a-aca6464d372b\") " pod="openshift-keda/keda-operator-ffbb595cb-fzsft" Apr 23 17:58:21.874539 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.874511 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-krkff"] Apr 23 17:58:21.900809 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.900778 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-krkff"] Apr 23 17:58:21.900952 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.900847 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff" Apr 23 17:58:21.902838 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:21.902812 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-metrics-apiserver-certs\"" Apr 23 17:58:22.020265 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:22.020229 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnd6w\" (UniqueName: \"kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-kube-api-access-hnd6w\") pod \"keda-metrics-apiserver-7c9f485588-krkff\" (UID: \"2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff" Apr 23 17:58:22.020265 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:22.020266 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-krkff\" (UID: \"2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff" Apr 23 17:58:22.020503 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:22.020303 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-certificates\") pod \"keda-metrics-apiserver-7c9f485588-krkff\" (UID: \"2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff" Apr 23 17:58:22.120715 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:22.120628 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hnd6w\" (UniqueName: \"kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-kube-api-access-hnd6w\") pod \"keda-metrics-apiserver-7c9f485588-krkff\" 
(UID: \"2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff" Apr 23 17:58:22.120715 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:22.120663 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-krkff\" (UID: \"2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff" Apr 23 17:58:22.120715 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:22.120704 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-certificates\") pod \"keda-metrics-apiserver-7c9f485588-krkff\" (UID: \"2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff" Apr 23 17:58:22.121006 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:22.120834 2572 secret.go:281] references non-existent secret key: tls.crt Apr 23 17:58:22.121006 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:22.120851 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 17:58:22.121006 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:22.120871 2572 projected.go:264] Couldn't get secret openshift-keda/keda-metrics-apiserver-certs: secret "keda-metrics-apiserver-certs" not found Apr 23 17:58:22.121006 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:22.120890 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-krkff: [references non-existent secret key: tls.crt, secret "keda-metrics-apiserver-certs" not found] Apr 23 17:58:22.121006 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:22.120958 2572 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-certificates podName:2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd nodeName:}" failed. No retries permitted until 2026-04-23 17:58:22.620939902 +0000 UTC m=+377.815844608 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-certificates") pod "keda-metrics-apiserver-7c9f485588-krkff" (UID: "2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd") : [references non-existent secret key: tls.crt, secret "keda-metrics-apiserver-certs" not found] Apr 23 17:58:22.121207 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:22.121084 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-krkff\" (UID: \"2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff" Apr 23 17:58:22.129645 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:22.129624 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnd6w\" (UniqueName: \"kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-kube-api-access-hnd6w\") pod \"keda-metrics-apiserver-7c9f485588-krkff\" (UID: \"2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff" Apr 23 17:58:22.321540 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:22.321507 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-certificates\") pod \"keda-operator-ffbb595cb-fzsft\" (UID: \"bf16f1e3-e6f3-494d-a81a-aca6464d372b\") " pod="openshift-keda/keda-operator-ffbb595cb-fzsft" Apr 23 17:58:22.321708 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:22.321622 2572 secret.go:281] references non-existent 
secret key: ca.crt Apr 23 17:58:22.321708 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:22.321634 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 23 17:58:22.321708 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:22.321642 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-fzsft: references non-existent secret key: ca.crt Apr 23 17:58:22.321708 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:22.321688 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-certificates podName:bf16f1e3-e6f3-494d-a81a-aca6464d372b nodeName:}" failed. No retries permitted until 2026-04-23 17:58:23.321675555 +0000 UTC m=+378.516580260 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-certificates") pod "keda-operator-ffbb595cb-fzsft" (UID: "bf16f1e3-e6f3-494d-a81a-aca6464d372b") : references non-existent secret key: ca.crt Apr 23 17:58:22.623117 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:22.623075 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-certificates\") pod \"keda-metrics-apiserver-7c9f485588-krkff\" (UID: \"2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff" Apr 23 17:58:22.623619 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:22.623217 2572 secret.go:281] references non-existent secret key: tls.crt Apr 23 17:58:22.623619 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:22.623236 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 17:58:22.623619 ip-10-0-133-178 kubenswrapper[2572]: 
E0423 17:58:22.623259 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-krkff: references non-existent secret key: tls.crt Apr 23 17:58:22.623619 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:22.623334 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-certificates podName:2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd nodeName:}" failed. No retries permitted until 2026-04-23 17:58:23.623315276 +0000 UTC m=+378.818219995 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-certificates") pod "keda-metrics-apiserver-7c9f485588-krkff" (UID: "2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd") : references non-existent secret key: tls.crt Apr 23 17:58:23.326817 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:23.326774 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-certificates\") pod \"keda-operator-ffbb595cb-fzsft\" (UID: \"bf16f1e3-e6f3-494d-a81a-aca6464d372b\") " pod="openshift-keda/keda-operator-ffbb595cb-fzsft" Apr 23 17:58:23.327003 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:23.326919 2572 secret.go:281] references non-existent secret key: ca.crt Apr 23 17:58:23.327003 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:23.326940 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 23 17:58:23.327003 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:23.326950 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-fzsft: references non-existent secret key: ca.crt Apr 23 17:58:23.327003 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:23.327005 2572 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-certificates podName:bf16f1e3-e6f3-494d-a81a-aca6464d372b nodeName:}" failed. No retries permitted until 2026-04-23 17:58:25.326988633 +0000 UTC m=+380.521893339 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-certificates") pod "keda-operator-ffbb595cb-fzsft" (UID: "bf16f1e3-e6f3-494d-a81a-aca6464d372b") : references non-existent secret key: ca.crt Apr 23 17:58:23.629907 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:23.629816 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-certificates\") pod \"keda-metrics-apiserver-7c9f485588-krkff\" (UID: \"2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff" Apr 23 17:58:23.630249 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:23.629978 2572 secret.go:281] references non-existent secret key: tls.crt Apr 23 17:58:23.630249 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:23.629995 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 17:58:23.630249 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:23.630014 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-krkff: references non-existent secret key: tls.crt Apr 23 17:58:23.630249 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:23.630080 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-certificates podName:2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd nodeName:}" failed. 
No retries permitted until 2026-04-23 17:58:25.630065911 +0000 UTC m=+380.824970616 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-certificates") pod "keda-metrics-apiserver-7c9f485588-krkff" (UID: "2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd") : references non-existent secret key: tls.crt
Apr 23 17:58:25.342291 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:25.342240 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-certificates\") pod \"keda-operator-ffbb595cb-fzsft\" (UID: \"bf16f1e3-e6f3-494d-a81a-aca6464d372b\") " pod="openshift-keda/keda-operator-ffbb595cb-fzsft"
Apr 23 17:58:25.342724 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:25.342423 2572 secret.go:281] references non-existent secret key: ca.crt
Apr 23 17:58:25.342724 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:25.342442 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt
Apr 23 17:58:25.342724 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:25.342452 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-fzsft: references non-existent secret key: ca.crt
Apr 23 17:58:25.342724 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:25.342516 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-certificates podName:bf16f1e3-e6f3-494d-a81a-aca6464d372b nodeName:}" failed. No retries permitted until 2026-04-23 17:58:29.342501037 +0000 UTC m=+384.537405742 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-certificates") pod "keda-operator-ffbb595cb-fzsft" (UID: "bf16f1e3-e6f3-494d-a81a-aca6464d372b") : references non-existent secret key: ca.crt
Apr 23 17:58:25.644231 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:25.644146 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-certificates\") pod \"keda-metrics-apiserver-7c9f485588-krkff\" (UID: \"2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff"
Apr 23 17:58:25.644374 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:25.644254 2572 secret.go:281] references non-existent secret key: tls.crt
Apr 23 17:58:25.644374 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:25.644266 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt
Apr 23 17:58:25.644374 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:25.644282 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-krkff: references non-existent secret key: tls.crt
Apr 23 17:58:25.644374 ip-10-0-133-178 kubenswrapper[2572]: E0423 17:58:25.644341 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-certificates podName:2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd nodeName:}" failed. No retries permitted until 2026-04-23 17:58:29.644328145 +0000 UTC m=+384.839232850 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-certificates") pod "keda-metrics-apiserver-7c9f485588-krkff" (UID: "2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd") : references non-existent secret key: tls.crt
Apr 23 17:58:29.373232 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:29.373202 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-certificates\") pod \"keda-operator-ffbb595cb-fzsft\" (UID: \"bf16f1e3-e6f3-494d-a81a-aca6464d372b\") " pod="openshift-keda/keda-operator-ffbb595cb-fzsft"
Apr 23 17:58:29.375574 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:29.375553 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/bf16f1e3-e6f3-494d-a81a-aca6464d372b-certificates\") pod \"keda-operator-ffbb595cb-fzsft\" (UID: \"bf16f1e3-e6f3-494d-a81a-aca6464d372b\") " pod="openshift-keda/keda-operator-ffbb595cb-fzsft"
Apr 23 17:58:29.431674 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:29.431645 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-fzsft"
Apr 23 17:58:29.545908 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:29.545876 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-fzsft"]
Apr 23 17:58:29.549424 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:58:29.549387 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf16f1e3_e6f3_494d_a81a_aca6464d372b.slice/crio-af35db61330c562e8b9c09aa360c50212d37ce51f962f79a6bcbd29547a61530 WatchSource:0}: Error finding container af35db61330c562e8b9c09aa360c50212d37ce51f962f79a6bcbd29547a61530: Status 404 returned error can't find the container with id af35db61330c562e8b9c09aa360c50212d37ce51f962f79a6bcbd29547a61530
Apr 23 17:58:29.676074 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:29.676001 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-certificates\") pod \"keda-metrics-apiserver-7c9f485588-krkff\" (UID: \"2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff"
Apr 23 17:58:29.678334 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:29.678310 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd-certificates\") pod \"keda-metrics-apiserver-7c9f485588-krkff\" (UID: \"2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff"
Apr 23 17:58:29.710878 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:29.710853 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff"
Apr 23 17:58:29.823710 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:29.823675 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-krkff"]
Apr 23 17:58:29.826210 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:58:29.826178 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2857a1d6_b6f3_4ea5_a1c4_865de8ceeddd.slice/crio-6ce3c99b8855a009df54ab9e995f16a019c91089dbdd2ced4e9c8ed25b556a38 WatchSource:0}: Error finding container 6ce3c99b8855a009df54ab9e995f16a019c91089dbdd2ced4e9c8ed25b556a38: Status 404 returned error can't find the container with id 6ce3c99b8855a009df54ab9e995f16a019c91089dbdd2ced4e9c8ed25b556a38
Apr 23 17:58:30.379034 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:30.378990 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff" event={"ID":"2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd","Type":"ContainerStarted","Data":"6ce3c99b8855a009df54ab9e995f16a019c91089dbdd2ced4e9c8ed25b556a38"}
Apr 23 17:58:30.380602 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:30.380558 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-fzsft" event={"ID":"bf16f1e3-e6f3-494d-a81a-aca6464d372b","Type":"ContainerStarted","Data":"af35db61330c562e8b9c09aa360c50212d37ce51f962f79a6bcbd29547a61530"}
Apr 23 17:58:33.390525 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:33.390420 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff" event={"ID":"2857a1d6-b6f3-4ea5-a1c4-865de8ceeddd","Type":"ContainerStarted","Data":"8384fdd5efb79c86cda0e8b307aba4d683c59a707fba1ec584c396904f578154"}
Apr 23 17:58:33.390525 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:33.390511 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff"
Apr 23 17:58:33.391830 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:33.391806 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-fzsft" event={"ID":"bf16f1e3-e6f3-494d-a81a-aca6464d372b","Type":"ContainerStarted","Data":"99c1a33faf8280c8785ec9e07a050b8f5ca585c385078540edeac177a9d42163"}
Apr 23 17:58:33.391960 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:33.391948 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-operator-ffbb595cb-fzsft"
Apr 23 17:58:33.414475 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:33.414389 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff" podStartSLOduration=9.208190759 podStartE2EDuration="12.414377766s" podCreationTimestamp="2026-04-23 17:58:21 +0000 UTC" firstStartedPulling="2026-04-23 17:58:29.827552735 +0000 UTC m=+385.022457440" lastFinishedPulling="2026-04-23 17:58:33.033739725 +0000 UTC m=+388.228644447" observedRunningTime="2026-04-23 17:58:33.413595065 +0000 UTC m=+388.608499793" watchObservedRunningTime="2026-04-23 17:58:33.414377766 +0000 UTC m=+388.609282489"
Apr 23 17:58:33.437906 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:33.437864 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-operator-ffbb595cb-fzsft" podStartSLOduration=8.949995357 podStartE2EDuration="12.43785394s" podCreationTimestamp="2026-04-23 17:58:21 +0000 UTC" firstStartedPulling="2026-04-23 17:58:29.551106365 +0000 UTC m=+384.746011071" lastFinishedPulling="2026-04-23 17:58:33.038964945 +0000 UTC m=+388.233869654" observedRunningTime="2026-04-23 17:58:33.436920777 +0000 UTC m=+388.631825503" watchObservedRunningTime="2026-04-23 17:58:33.43785394 +0000 UTC m=+388.632758667"
Apr 23 17:58:44.399594 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:44.399564 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-krkff"
Apr 23 17:58:54.397003 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:58:54.396971 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-operator-ffbb595cb-fzsft"
Apr 23 17:59:30.973811 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:30.973777 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-n9pld"]
Apr 23 17:59:30.976793 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:30.976772 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/llmisvc-controller-manager-68cc5db7c4-n9pld"
Apr 23 17:59:30.981448 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:30.981423 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"llmisvc-controller-manager-dockercfg-flr5b\""
Apr 23 17:59:30.981871 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:30.981853 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"llmisvc-webhook-server-cert\""
Apr 23 17:59:30.981974 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:30.981956 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\""
Apr 23 17:59:30.982067 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:30.982052 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\""
Apr 23 17:59:30.987581 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:30.987561 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c4bece2-d94d-4087-98bd-29e1c9e938fd-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-n9pld\" (UID: \"2c4bece2-d94d-4087-98bd-29e1c9e938fd\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-n9pld"
Apr 23 17:59:30.987690 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:30.987597 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl6xh\" (UniqueName: \"kubernetes.io/projected/2c4bece2-d94d-4087-98bd-29e1c9e938fd-kube-api-access-bl6xh\") pod \"llmisvc-controller-manager-68cc5db7c4-n9pld\" (UID: \"2c4bece2-d94d-4087-98bd-29e1c9e938fd\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-n9pld"
Apr 23 17:59:30.993090 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:30.993066 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-n9pld"]
Apr 23 17:59:31.088552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:31.088518 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c4bece2-d94d-4087-98bd-29e1c9e938fd-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-n9pld\" (UID: \"2c4bece2-d94d-4087-98bd-29e1c9e938fd\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-n9pld"
Apr 23 17:59:31.088552 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:31.088556 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bl6xh\" (UniqueName: \"kubernetes.io/projected/2c4bece2-d94d-4087-98bd-29e1c9e938fd-kube-api-access-bl6xh\") pod \"llmisvc-controller-manager-68cc5db7c4-n9pld\" (UID: \"2c4bece2-d94d-4087-98bd-29e1c9e938fd\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-n9pld"
Apr 23 17:59:31.091038 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:31.091016 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c4bece2-d94d-4087-98bd-29e1c9e938fd-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-n9pld\" (UID: \"2c4bece2-d94d-4087-98bd-29e1c9e938fd\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-n9pld"
Apr 23 17:59:31.097320 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:31.097295 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl6xh\" (UniqueName: \"kubernetes.io/projected/2c4bece2-d94d-4087-98bd-29e1c9e938fd-kube-api-access-bl6xh\") pod \"llmisvc-controller-manager-68cc5db7c4-n9pld\" (UID: \"2c4bece2-d94d-4087-98bd-29e1c9e938fd\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-n9pld"
Apr 23 17:59:31.288059 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:31.287964 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/llmisvc-controller-manager-68cc5db7c4-n9pld"
Apr 23 17:59:31.403970 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:31.403939 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-n9pld"]
Apr 23 17:59:31.407391 ip-10-0-133-178 kubenswrapper[2572]: W0423 17:59:31.407363 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2c4bece2_d94d_4087_98bd_29e1c9e938fd.slice/crio-407b86f2fbd5d31f51061869ef0c832e94da71016c92a1fd7780a19033ca6e06 WatchSource:0}: Error finding container 407b86f2fbd5d31f51061869ef0c832e94da71016c92a1fd7780a19033ca6e06: Status 404 returned error can't find the container with id 407b86f2fbd5d31f51061869ef0c832e94da71016c92a1fd7780a19033ca6e06
Apr 23 17:59:31.541973 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:31.541888 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-n9pld" event={"ID":"2c4bece2-d94d-4087-98bd-29e1c9e938fd","Type":"ContainerStarted","Data":"407b86f2fbd5d31f51061869ef0c832e94da71016c92a1fd7780a19033ca6e06"}
Apr 23 17:59:33.549778 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:33.549747 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-n9pld" event={"ID":"2c4bece2-d94d-4087-98bd-29e1c9e938fd","Type":"ContainerStarted","Data":"491156b321b8066ca02b8221a6cb40018f41c0465d751ce9709fc4fe6a8a5ab6"}
Apr 23 17:59:33.550080 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:33.549815 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-n9pld"
Apr 23 17:59:33.566090 ip-10-0-133-178 kubenswrapper[2572]: I0423 17:59:33.566046 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/llmisvc-controller-manager-68cc5db7c4-n9pld" podStartSLOduration=1.541716522 podStartE2EDuration="3.566033506s" podCreationTimestamp="2026-04-23 17:59:30 +0000 UTC" firstStartedPulling="2026-04-23 17:59:31.408624858 +0000 UTC m=+446.603529562" lastFinishedPulling="2026-04-23 17:59:33.432941838 +0000 UTC m=+448.627846546" observedRunningTime="2026-04-23 17:59:33.56555001 +0000 UTC m=+448.760454736" watchObservedRunningTime="2026-04-23 17:59:33.566033506 +0000 UTC m=+448.760938228"
Apr 23 18:00:04.554860 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:00:04.554830 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-n9pld"
Apr 23 18:01:15.929181 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:15.929142 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"]
Apr 23 18:01:15.933768 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:15.933745 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:15.935881 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:15.935864 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"isvc-xgboost-graph-kube-rbac-proxy-sar-config\""
Apr 23 18:01:15.936194 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:15.936176 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\""
Apr 23 18:01:15.936292 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:15.936195 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"kube-root-ca.crt\""
Apr 23 18:01:15.936650 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:15.936635 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"isvc-xgboost-graph-predictor-serving-cert\""
Apr 23 18:01:15.936702 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:15.936676 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-jv9tx\""
Apr 23 18:01:15.942871 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:15.942852 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"]
Apr 23 18:01:15.975878 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:15.975846 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/9742f92c-6d03-44d2-892a-06e0d31fab64-kserve-provision-location\") pod \"isvc-xgboost-graph-predictor-669d8d6456-rzx5c\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:15.976049 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:15.975899 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9742f92c-6d03-44d2-892a-06e0d31fab64-proxy-tls\") pod \"isvc-xgboost-graph-predictor-669d8d6456-rzx5c\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:15.976049 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:15.975993 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d626f\" (UniqueName: \"kubernetes.io/projected/9742f92c-6d03-44d2-892a-06e0d31fab64-kube-api-access-d626f\") pod \"isvc-xgboost-graph-predictor-669d8d6456-rzx5c\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:15.976162 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:15.976074 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"isvc-xgboost-graph-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/9742f92c-6d03-44d2-892a-06e0d31fab64-isvc-xgboost-graph-kube-rbac-proxy-sar-config\") pod \"isvc-xgboost-graph-predictor-669d8d6456-rzx5c\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:16.077292 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:16.077256 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9742f92c-6d03-44d2-892a-06e0d31fab64-proxy-tls\") pod \"isvc-xgboost-graph-predictor-669d8d6456-rzx5c\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:16.077492 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:16.077299 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d626f\" (UniqueName: \"kubernetes.io/projected/9742f92c-6d03-44d2-892a-06e0d31fab64-kube-api-access-d626f\") pod \"isvc-xgboost-graph-predictor-669d8d6456-rzx5c\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:16.077492 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:16.077335 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"isvc-xgboost-graph-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/9742f92c-6d03-44d2-892a-06e0d31fab64-isvc-xgboost-graph-kube-rbac-proxy-sar-config\") pod \"isvc-xgboost-graph-predictor-669d8d6456-rzx5c\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:16.077492 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:16.077361 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/9742f92c-6d03-44d2-892a-06e0d31fab64-kserve-provision-location\") pod \"isvc-xgboost-graph-predictor-669d8d6456-rzx5c\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:16.077492 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:01:16.077456 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/isvc-xgboost-graph-predictor-serving-cert: secret "isvc-xgboost-graph-predictor-serving-cert" not found
Apr 23 18:01:16.077720 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:01:16.077557 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9742f92c-6d03-44d2-892a-06e0d31fab64-proxy-tls podName:9742f92c-6d03-44d2-892a-06e0d31fab64 nodeName:}" failed. No retries permitted until 2026-04-23 18:01:16.577521765 +0000 UTC m=+551.772426479 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/9742f92c-6d03-44d2-892a-06e0d31fab64-proxy-tls") pod "isvc-xgboost-graph-predictor-669d8d6456-rzx5c" (UID: "9742f92c-6d03-44d2-892a-06e0d31fab64") : secret "isvc-xgboost-graph-predictor-serving-cert" not found
Apr 23 18:01:16.077720 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:16.077707 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/9742f92c-6d03-44d2-892a-06e0d31fab64-kserve-provision-location\") pod \"isvc-xgboost-graph-predictor-669d8d6456-rzx5c\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:16.078021 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:16.078002 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"isvc-xgboost-graph-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/9742f92c-6d03-44d2-892a-06e0d31fab64-isvc-xgboost-graph-kube-rbac-proxy-sar-config\") pod \"isvc-xgboost-graph-predictor-669d8d6456-rzx5c\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:16.085853 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:16.085834 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d626f\" (UniqueName: \"kubernetes.io/projected/9742f92c-6d03-44d2-892a-06e0d31fab64-kube-api-access-d626f\") pod \"isvc-xgboost-graph-predictor-669d8d6456-rzx5c\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:16.581027 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:16.580996 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9742f92c-6d03-44d2-892a-06e0d31fab64-proxy-tls\") pod \"isvc-xgboost-graph-predictor-669d8d6456-rzx5c\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:16.583328 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:16.583300 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9742f92c-6d03-44d2-892a-06e0d31fab64-proxy-tls\") pod \"isvc-xgboost-graph-predictor-669d8d6456-rzx5c\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:16.844742 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:16.844658 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:16.959271 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:16.959235 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"]
Apr 23 18:01:16.962946 ip-10-0-133-178 kubenswrapper[2572]: W0423 18:01:16.962920 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9742f92c_6d03_44d2_892a_06e0d31fab64.slice/crio-f0e8e969da609dcf578abc24ade4ba399212fe0a02c1bea4ea1f2f3fb2d89df6 WatchSource:0}: Error finding container f0e8e969da609dcf578abc24ade4ba399212fe0a02c1bea4ea1f2f3fb2d89df6: Status 404 returned error can't find the container with id f0e8e969da609dcf578abc24ade4ba399212fe0a02c1bea4ea1f2f3fb2d89df6
Apr 23 18:01:17.819515 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:17.819480 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" event={"ID":"9742f92c-6d03-44d2-892a-06e0d31fab64","Type":"ContainerStarted","Data":"f0e8e969da609dcf578abc24ade4ba399212fe0a02c1bea4ea1f2f3fb2d89df6"}
Apr 23 18:01:21.833026 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:21.832984 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" event={"ID":"9742f92c-6d03-44d2-892a-06e0d31fab64","Type":"ContainerStarted","Data":"1d66012276304452a7699f6b6baf6a59ebcb3d74461f39ac42f1ce6986f9721f"}
Apr 23 18:01:24.841300 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:24.841267 2572 generic.go:358] "Generic (PLEG): container finished" podID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerID="1d66012276304452a7699f6b6baf6a59ebcb3d74461f39ac42f1ce6986f9721f" exitCode=0
Apr 23 18:01:24.841693 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:24.841335 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" event={"ID":"9742f92c-6d03-44d2-892a-06e0d31fab64","Type":"ContainerDied","Data":"1d66012276304452a7699f6b6baf6a59ebcb3d74461f39ac42f1ce6986f9721f"}
Apr 23 18:01:41.897450 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:41.897384 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" event={"ID":"9742f92c-6d03-44d2-892a-06e0d31fab64","Type":"ContainerStarted","Data":"11ab4d82e15beddd3ebd63914ba52d701d210838b6872a7f01df1ba751376a1b"}
Apr 23 18:01:43.905349 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:43.905310 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" event={"ID":"9742f92c-6d03-44d2-892a-06e0d31fab64","Type":"ContainerStarted","Data":"9f9f67ac07452cb29d85eb67e2bfc5ab0300d7bf065a230afc7e94d8f5e5d9d8"}
Apr 23 18:01:43.905825 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:43.905581 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:44.910231 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:44.910194 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:44.911282 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:44.911252 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused"
Apr 23 18:01:45.913139 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:45.913100 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused"
Apr 23 18:01:50.917682 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:50.917654 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:01:50.918243 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:50.918217 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused"
Apr 23 18:01:50.936747 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:01:50.936690 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" podStartSLOduration=9.596800791 podStartE2EDuration="35.936675355s" podCreationTimestamp="2026-04-23 18:01:15 +0000 UTC" firstStartedPulling="2026-04-23 18:01:16.965129561 +0000 UTC m=+552.160034266" lastFinishedPulling="2026-04-23 18:01:43.305004124 +0000 UTC m=+578.499908830" observedRunningTime="2026-04-23 18:01:43.927185444 +0000 UTC m=+579.122090177" watchObservedRunningTime="2026-04-23 18:01:50.936675355 +0000 UTC m=+586.131580082"
Apr 23 18:02:00.918990 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:00.918948 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused"
Apr 23 18:02:05.344228 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:05.344203 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log"
Apr 23 18:02:05.344599 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:05.344550 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log"
Apr 23 18:02:05.348208 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:05.348187 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log"
Apr 23 18:02:05.348334 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:05.348240 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log"
Apr 23 18:02:10.918457 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:10.918394 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused"
Apr 23 18:02:20.919252 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:20.919165 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused"
Apr 23 18:02:30.919011 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:30.918960 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused"
Apr 23 18:02:35.542600 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:35.542568 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m"]
Apr 23 18:02:35.545520 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:35.545501 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m"
Apr 23 18:02:35.547577 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:35.547555 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-e7d75-serving-cert\""
Apr 23 18:02:35.547691 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:35.547615 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-e7d75-kube-rbac-proxy-sar-config\""
Apr 23 18:02:35.553772 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:35.553749 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m"]
Apr 23 18:02:35.666181 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:35.666149 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/413f510b-b6a9-44b1-8428-3a0e318ea683-proxy-tls\") pod \"switch-graph-e7d75-776949f8dd-kqj2m\" (UID: \"413f510b-b6a9-44b1-8428-3a0e318ea683\") " pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m"
Apr 23 18:02:35.666181 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:35.666185 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/413f510b-b6a9-44b1-8428-3a0e318ea683-openshift-service-ca-bundle\") pod \"switch-graph-e7d75-776949f8dd-kqj2m\" (UID: \"413f510b-b6a9-44b1-8428-3a0e318ea683\") " pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m"
Apr 23 18:02:35.767415 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:35.767378 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/413f510b-b6a9-44b1-8428-3a0e318ea683-proxy-tls\") pod \"switch-graph-e7d75-776949f8dd-kqj2m\" (UID: \"413f510b-b6a9-44b1-8428-3a0e318ea683\") " pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m"
Apr 23 18:02:35.767565 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:35.767450 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/413f510b-b6a9-44b1-8428-3a0e318ea683-openshift-service-ca-bundle\") pod \"switch-graph-e7d75-776949f8dd-kqj2m\" (UID: \"413f510b-b6a9-44b1-8428-3a0e318ea683\") " pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m"
Apr 23 18:02:35.768228 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:35.768200 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/413f510b-b6a9-44b1-8428-3a0e318ea683-openshift-service-ca-bundle\") pod \"switch-graph-e7d75-776949f8dd-kqj2m\" (UID: \"413f510b-b6a9-44b1-8428-3a0e318ea683\") " pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m"
Apr 23 18:02:35.769814 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:35.769797 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/413f510b-b6a9-44b1-8428-3a0e318ea683-proxy-tls\") pod \"switch-graph-e7d75-776949f8dd-kqj2m\" (UID: \"413f510b-b6a9-44b1-8428-3a0e318ea683\") " pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m"
Apr 23 18:02:35.855785 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:35.855672 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m"
Apr 23 18:02:35.974366 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:35.974334 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m"]
Apr 23 18:02:35.977563 ip-10-0-133-178 kubenswrapper[2572]: W0423 18:02:35.977538 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod413f510b_b6a9_44b1_8428_3a0e318ea683.slice/crio-ac89a45d167665e4df7f83008d524795f6e5acd1e6429cfad9596b49227dd3b7 WatchSource:0}: Error finding container ac89a45d167665e4df7f83008d524795f6e5acd1e6429cfad9596b49227dd3b7: Status 404 returned error can't find the container with id ac89a45d167665e4df7f83008d524795f6e5acd1e6429cfad9596b49227dd3b7
Apr 23 18:02:36.049372 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:36.049344 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m" event={"ID":"413f510b-b6a9-44b1-8428-3a0e318ea683","Type":"ContainerStarted","Data":"ac89a45d167665e4df7f83008d524795f6e5acd1e6429cfad9596b49227dd3b7"}
Apr 23 18:02:39.058639 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:39.058595 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m" event={"ID":"413f510b-b6a9-44b1-8428-3a0e318ea683","Type":"ContainerStarted","Data":"5911044e11c587c950f54bf8b21de128c617214195d08da6494b4acf769e4061"}
Apr 23 18:02:39.059066 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:39.058783 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m"
Apr 23 18:02:39.078802 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:39.078721 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m"
podStartSLOduration=1.651917857 podStartE2EDuration="4.078704918s" podCreationTimestamp="2026-04-23 18:02:35 +0000 UTC" firstStartedPulling="2026-04-23 18:02:35.979454354 +0000 UTC m=+631.174359060" lastFinishedPulling="2026-04-23 18:02:38.40624141 +0000 UTC m=+633.601146121" observedRunningTime="2026-04-23 18:02:39.076992698 +0000 UTC m=+634.271897435" watchObservedRunningTime="2026-04-23 18:02:39.078704918 +0000 UTC m=+634.273609644" Apr 23 18:02:40.919737 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:40.919704 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" Apr 23 18:02:45.066770 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:45.066743 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m" Apr 23 18:02:45.720228 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:45.720194 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m"] Apr 23 18:02:45.720423 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:45.720386 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m" podUID="413f510b-b6a9-44b1-8428-3a0e318ea683" containerName="switch-graph-e7d75" containerID="cri-o://5911044e11c587c950f54bf8b21de128c617214195d08da6494b4acf769e4061" gracePeriod=30 Apr 23 18:02:50.064996 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:50.064956 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m" podUID="413f510b-b6a9-44b1-8428-3a0e318ea683" containerName="switch-graph-e7d75" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:02:55.065050 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:02:55.065014 2572 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m" podUID="413f510b-b6a9-44b1-8428-3a0e318ea683" containerName="switch-graph-e7d75" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:03:00.066464 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:00.066420 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m" podUID="413f510b-b6a9-44b1-8428-3a0e318ea683" containerName="switch-graph-e7d75" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:03:00.066989 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:00.066534 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m" Apr 23 18:03:05.065552 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:05.065511 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m" podUID="413f510b-b6a9-44b1-8428-3a0e318ea683" containerName="switch-graph-e7d75" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:03:10.064884 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:10.064828 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m" podUID="413f510b-b6a9-44b1-8428-3a0e318ea683" containerName="switch-graph-e7d75" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:03:15.065390 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:15.065343 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m" podUID="413f510b-b6a9-44b1-8428-3a0e318ea683" containerName="switch-graph-e7d75" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:03:15.572943 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:15.572903 2572 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb"] Apr 23 18:03:15.578470 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:15.578448 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" Apr 23 18:03:15.580613 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:15.580593 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-kube-rbac-proxy-sar-config\"" Apr 23 18:03:15.580613 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:15.580606 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-serving-cert\"" Apr 23 18:03:15.583742 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:15.583720 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb"] Apr 23 18:03:15.652350 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:15.652324 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/58386c59-d272-43c1-b6bc-ad193ef4461e-proxy-tls\") pod \"model-chainer-5894656ff9-2kqdb\" (UID: \"58386c59-d272-43c1-b6bc-ad193ef4461e\") " pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" Apr 23 18:03:15.652478 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:15.652384 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58386c59-d272-43c1-b6bc-ad193ef4461e-openshift-service-ca-bundle\") pod \"model-chainer-5894656ff9-2kqdb\" (UID: \"58386c59-d272-43c1-b6bc-ad193ef4461e\") " pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" Apr 23 18:03:15.745385 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:03:15.745360 2572 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod413f510b_b6a9_44b1_8428_3a0e318ea683.slice/crio-5911044e11c587c950f54bf8b21de128c617214195d08da6494b4acf769e4061.scope\": RecentStats: unable to find data in memory cache]" Apr 23 18:03:15.745385 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:03:15.745375 2572 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod413f510b_b6a9_44b1_8428_3a0e318ea683.slice/crio-conmon-5911044e11c587c950f54bf8b21de128c617214195d08da6494b4acf769e4061.scope\": RecentStats: unable to find data in memory cache]" Apr 23 18:03:15.753136 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:15.753112 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58386c59-d272-43c1-b6bc-ad193ef4461e-openshift-service-ca-bundle\") pod \"model-chainer-5894656ff9-2kqdb\" (UID: \"58386c59-d272-43c1-b6bc-ad193ef4461e\") " pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" Apr 23 18:03:15.753231 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:15.753156 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/58386c59-d272-43c1-b6bc-ad193ef4461e-proxy-tls\") pod \"model-chainer-5894656ff9-2kqdb\" (UID: \"58386c59-d272-43c1-b6bc-ad193ef4461e\") " pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" Apr 23 18:03:15.753276 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:03:15.753256 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/model-chainer-serving-cert: secret "model-chainer-serving-cert" not found Apr 23 18:03:15.753326 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:03:15.753315 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58386c59-d272-43c1-b6bc-ad193ef4461e-proxy-tls 
podName:58386c59-d272-43c1-b6bc-ad193ef4461e nodeName:}" failed. No retries permitted until 2026-04-23 18:03:16.253296831 +0000 UTC m=+671.448201541 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/58386c59-d272-43c1-b6bc-ad193ef4461e-proxy-tls") pod "model-chainer-5894656ff9-2kqdb" (UID: "58386c59-d272-43c1-b6bc-ad193ef4461e") : secret "model-chainer-serving-cert" not found Apr 23 18:03:15.753744 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:15.753725 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58386c59-d272-43c1-b6bc-ad193ef4461e-openshift-service-ca-bundle\") pod \"model-chainer-5894656ff9-2kqdb\" (UID: \"58386c59-d272-43c1-b6bc-ad193ef4461e\") " pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" Apr 23 18:03:15.856500 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:15.856475 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m" Apr 23 18:03:15.954690 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:15.954660 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/413f510b-b6a9-44b1-8428-3a0e318ea683-proxy-tls\") pod \"413f510b-b6a9-44b1-8428-3a0e318ea683\" (UID: \"413f510b-b6a9-44b1-8428-3a0e318ea683\") " Apr 23 18:03:15.954690 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:15.954704 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/413f510b-b6a9-44b1-8428-3a0e318ea683-openshift-service-ca-bundle\") pod \"413f510b-b6a9-44b1-8428-3a0e318ea683\" (UID: \"413f510b-b6a9-44b1-8428-3a0e318ea683\") " Apr 23 18:03:15.955082 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:15.955058 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/413f510b-b6a9-44b1-8428-3a0e318ea683-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "413f510b-b6a9-44b1-8428-3a0e318ea683" (UID: "413f510b-b6a9-44b1-8428-3a0e318ea683"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:03:15.956800 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:15.956777 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/413f510b-b6a9-44b1-8428-3a0e318ea683-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "413f510b-b6a9-44b1-8428-3a0e318ea683" (UID: "413f510b-b6a9-44b1-8428-3a0e318ea683"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:03:16.055952 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:16.055920 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/413f510b-b6a9-44b1-8428-3a0e318ea683-proxy-tls\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:03:16.055952 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:16.055947 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/413f510b-b6a9-44b1-8428-3a0e318ea683-openshift-service-ca-bundle\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:03:16.159135 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:16.159048 2572 generic.go:358] "Generic (PLEG): container finished" podID="413f510b-b6a9-44b1-8428-3a0e318ea683" containerID="5911044e11c587c950f54bf8b21de128c617214195d08da6494b4acf769e4061" exitCode=0 Apr 23 18:03:16.159135 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:16.159103 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m" event={"ID":"413f510b-b6a9-44b1-8428-3a0e318ea683","Type":"ContainerDied","Data":"5911044e11c587c950f54bf8b21de128c617214195d08da6494b4acf769e4061"} Apr 23 18:03:16.159135 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:16.159115 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m" Apr 23 18:03:16.159135 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:16.159128 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m" event={"ID":"413f510b-b6a9-44b1-8428-3a0e318ea683","Type":"ContainerDied","Data":"ac89a45d167665e4df7f83008d524795f6e5acd1e6429cfad9596b49227dd3b7"} Apr 23 18:03:16.159761 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:16.159142 2572 scope.go:117] "RemoveContainer" containerID="5911044e11c587c950f54bf8b21de128c617214195d08da6494b4acf769e4061" Apr 23 18:03:16.167167 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:16.167134 2572 scope.go:117] "RemoveContainer" containerID="5911044e11c587c950f54bf8b21de128c617214195d08da6494b4acf769e4061" Apr 23 18:03:16.167423 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:03:16.167386 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5911044e11c587c950f54bf8b21de128c617214195d08da6494b4acf769e4061\": container with ID starting with 5911044e11c587c950f54bf8b21de128c617214195d08da6494b4acf769e4061 not found: ID does not exist" containerID="5911044e11c587c950f54bf8b21de128c617214195d08da6494b4acf769e4061" Apr 23 18:03:16.167517 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:16.167431 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5911044e11c587c950f54bf8b21de128c617214195d08da6494b4acf769e4061"} err="failed to get container status \"5911044e11c587c950f54bf8b21de128c617214195d08da6494b4acf769e4061\": rpc error: code = NotFound desc = could not find container \"5911044e11c587c950f54bf8b21de128c617214195d08da6494b4acf769e4061\": container with ID starting with 5911044e11c587c950f54bf8b21de128c617214195d08da6494b4acf769e4061 not found: ID does not exist" Apr 23 18:03:16.180104 ip-10-0-133-178 kubenswrapper[2572]: I0423 
18:03:16.180076 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m"] Apr 23 18:03:16.181344 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:16.181323 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-e7d75-776949f8dd-kqj2m"] Apr 23 18:03:16.256804 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:16.256769 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/58386c59-d272-43c1-b6bc-ad193ef4461e-proxy-tls\") pod \"model-chainer-5894656ff9-2kqdb\" (UID: \"58386c59-d272-43c1-b6bc-ad193ef4461e\") " pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" Apr 23 18:03:16.259134 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:16.259112 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/58386c59-d272-43c1-b6bc-ad193ef4461e-proxy-tls\") pod \"model-chainer-5894656ff9-2kqdb\" (UID: \"58386c59-d272-43c1-b6bc-ad193ef4461e\") " pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" Apr 23 18:03:16.488356 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:16.488330 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" Apr 23 18:03:16.603362 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:16.603327 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb"] Apr 23 18:03:16.606274 ip-10-0-133-178 kubenswrapper[2572]: W0423 18:03:16.606244 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58386c59_d272_43c1_b6bc_ad193ef4461e.slice/crio-ab959e01dc61ed02111286caa49e2f9e86e4252d509d83c3354a31807e51a74f WatchSource:0}: Error finding container ab959e01dc61ed02111286caa49e2f9e86e4252d509d83c3354a31807e51a74f: Status 404 returned error can't find the container with id ab959e01dc61ed02111286caa49e2f9e86e4252d509d83c3354a31807e51a74f Apr 23 18:03:16.608078 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:16.608063 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 18:03:17.163820 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:17.163785 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" event={"ID":"58386c59-d272-43c1-b6bc-ad193ef4461e","Type":"ContainerStarted","Data":"cfb157e98e95ad60382ec5f13d054f26e48922b7a9713914c452d7a97f3af3b5"} Apr 23 18:03:17.163820 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:17.163819 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" event={"ID":"58386c59-d272-43c1-b6bc-ad193ef4461e","Type":"ContainerStarted","Data":"ab959e01dc61ed02111286caa49e2f9e86e4252d509d83c3354a31807e51a74f"} Apr 23 18:03:17.164323 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:17.163933 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" Apr 23 18:03:17.182946 ip-10-0-133-178 kubenswrapper[2572]: I0423 
18:03:17.182896 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" podStartSLOduration=2.18288457 podStartE2EDuration="2.18288457s" podCreationTimestamp="2026-04-23 18:03:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:03:17.181431776 +0000 UTC m=+672.376336504" watchObservedRunningTime="2026-04-23 18:03:17.18288457 +0000 UTC m=+672.377789294" Apr 23 18:03:17.417569 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:17.417492 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="413f510b-b6a9-44b1-8428-3a0e318ea683" path="/var/lib/kubelet/pods/413f510b-b6a9-44b1-8428-3a0e318ea683/volumes" Apr 23 18:03:23.172140 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:23.172111 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" Apr 23 18:03:25.663233 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:25.663198 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb"] Apr 23 18:03:25.663695 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:25.663477 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" podUID="58386c59-d272-43c1-b6bc-ad193ef4461e" containerName="model-chainer" containerID="cri-o://cfb157e98e95ad60382ec5f13d054f26e48922b7a9713914c452d7a97f3af3b5" gracePeriod=30 Apr 23 18:03:25.746315 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:25.746282 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"] Apr 23 18:03:25.746695 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:25.746637 2572 kuberuntime_container.go:864] "Killing container with a grace period" 
pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="kube-rbac-proxy" containerID="cri-o://9f9f67ac07452cb29d85eb67e2bfc5ab0300d7bf065a230afc7e94d8f5e5d9d8" gracePeriod=30 Apr 23 18:03:25.746808 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:25.746711 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="kserve-container" containerID="cri-o://11ab4d82e15beddd3ebd63914ba52d701d210838b6872a7f01df1ba751376a1b" gracePeriod=30 Apr 23 18:03:25.913864 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:25.913773 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.132.0.17:8643/healthz\": dial tcp 10.132.0.17:8643: connect: connection refused" Apr 23 18:03:26.191087 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:26.191003 2572 generic.go:358] "Generic (PLEG): container finished" podID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerID="9f9f67ac07452cb29d85eb67e2bfc5ab0300d7bf065a230afc7e94d8f5e5d9d8" exitCode=2 Apr 23 18:03:26.191232 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:26.191078 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" event={"ID":"9742f92c-6d03-44d2-892a-06e0d31fab64","Type":"ContainerDied","Data":"9f9f67ac07452cb29d85eb67e2bfc5ab0300d7bf065a230afc7e94d8f5e5d9d8"} Apr 23 18:03:28.171388 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:28.171353 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" podUID="58386c59-d272-43c1-b6bc-ad193ef4461e" containerName="model-chainer" 
probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:03:29.203756 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:29.203724 2572 generic.go:358] "Generic (PLEG): container finished" podID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerID="11ab4d82e15beddd3ebd63914ba52d701d210838b6872a7f01df1ba751376a1b" exitCode=0 Apr 23 18:03:29.204088 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:29.203804 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" event={"ID":"9742f92c-6d03-44d2-892a-06e0d31fab64","Type":"ContainerDied","Data":"11ab4d82e15beddd3ebd63914ba52d701d210838b6872a7f01df1ba751376a1b"} Apr 23 18:03:29.283291 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:29.283270 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" Apr 23 18:03:29.353811 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:29.353780 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"isvc-xgboost-graph-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/9742f92c-6d03-44d2-892a-06e0d31fab64-isvc-xgboost-graph-kube-rbac-proxy-sar-config\") pod \"9742f92c-6d03-44d2-892a-06e0d31fab64\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " Apr 23 18:03:29.354000 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:29.353840 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d626f\" (UniqueName: \"kubernetes.io/projected/9742f92c-6d03-44d2-892a-06e0d31fab64-kube-api-access-d626f\") pod \"9742f92c-6d03-44d2-892a-06e0d31fab64\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " Apr 23 18:03:29.354000 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:29.353868 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/9742f92c-6d03-44d2-892a-06e0d31fab64-proxy-tls\") pod \"9742f92c-6d03-44d2-892a-06e0d31fab64\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " Apr 23 18:03:29.354000 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:29.353923 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/9742f92c-6d03-44d2-892a-06e0d31fab64-kserve-provision-location\") pod \"9742f92c-6d03-44d2-892a-06e0d31fab64\" (UID: \"9742f92c-6d03-44d2-892a-06e0d31fab64\") " Apr 23 18:03:29.354309 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:29.354278 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9742f92c-6d03-44d2-892a-06e0d31fab64-isvc-xgboost-graph-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "isvc-xgboost-graph-kube-rbac-proxy-sar-config") pod "9742f92c-6d03-44d2-892a-06e0d31fab64" (UID: "9742f92c-6d03-44d2-892a-06e0d31fab64"). InnerVolumeSpecName "isvc-xgboost-graph-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:03:29.354309 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:29.354292 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9742f92c-6d03-44d2-892a-06e0d31fab64-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "9742f92c-6d03-44d2-892a-06e0d31fab64" (UID: "9742f92c-6d03-44d2-892a-06e0d31fab64"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 23 18:03:29.355909 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:29.355885 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9742f92c-6d03-44d2-892a-06e0d31fab64-kube-api-access-d626f" (OuterVolumeSpecName: "kube-api-access-d626f") pod "9742f92c-6d03-44d2-892a-06e0d31fab64" (UID: "9742f92c-6d03-44d2-892a-06e0d31fab64"). 
InnerVolumeSpecName "kube-api-access-d626f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:03:29.355909 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:29.355895 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9742f92c-6d03-44d2-892a-06e0d31fab64-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "9742f92c-6d03-44d2-892a-06e0d31fab64" (UID: "9742f92c-6d03-44d2-892a-06e0d31fab64"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:03:29.455113 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:29.455089 2572 reconciler_common.go:299] "Volume detached for volume \"isvc-xgboost-graph-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/9742f92c-6d03-44d2-892a-06e0d31fab64-isvc-xgboost-graph-kube-rbac-proxy-sar-config\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\""
Apr 23 18:03:29.455113 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:29.455113 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d626f\" (UniqueName: \"kubernetes.io/projected/9742f92c-6d03-44d2-892a-06e0d31fab64-kube-api-access-d626f\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\""
Apr 23 18:03:29.455270 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:29.455123 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9742f92c-6d03-44d2-892a-06e0d31fab64-proxy-tls\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\""
Apr 23 18:03:29.455270 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:29.455132 2572 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/9742f92c-6d03-44d2-892a-06e0d31fab64-kserve-provision-location\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\""
Apr 23 18:03:30.208267 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:30.208235 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c" event={"ID":"9742f92c-6d03-44d2-892a-06e0d31fab64","Type":"ContainerDied","Data":"f0e8e969da609dcf578abc24ade4ba399212fe0a02c1bea4ea1f2f3fb2d89df6"}
Apr 23 18:03:30.208714 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:30.208278 2572 scope.go:117] "RemoveContainer" containerID="9f9f67ac07452cb29d85eb67e2bfc5ab0300d7bf065a230afc7e94d8f5e5d9d8"
Apr 23 18:03:30.208714 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:30.208248 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"
Apr 23 18:03:30.215556 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:30.215538 2572 scope.go:117] "RemoveContainer" containerID="11ab4d82e15beddd3ebd63914ba52d701d210838b6872a7f01df1ba751376a1b"
Apr 23 18:03:30.222138 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:30.222115 2572 scope.go:117] "RemoveContainer" containerID="1d66012276304452a7699f6b6baf6a59ebcb3d74461f39ac42f1ce6986f9721f"
Apr 23 18:03:30.226762 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:30.226739 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"]
Apr 23 18:03:30.231437 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:30.231417 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-rzx5c"]
Apr 23 18:03:31.418154 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:31.418116 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" path="/var/lib/kubelet/pods/9742f92c-6d03-44d2-892a-06e0d31fab64/volumes"
Apr 23 18:03:33.171628 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:33.171584 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" podUID="58386c59-d272-43c1-b6bc-ad193ef4461e" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:03:38.171479 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:38.171443 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" podUID="58386c59-d272-43c1-b6bc-ad193ef4461e" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:03:38.171934 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:38.171552 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb"
Apr 23 18:03:43.170997 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:43.170950 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" podUID="58386c59-d272-43c1-b6bc-ad193ef4461e" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:03:46.001486 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.001396 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97"]
Apr 23 18:03:46.001824 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.001657 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="storage-initializer"
Apr 23 18:03:46.001824 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.001667 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="storage-initializer"
Apr 23 18:03:46.001824 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.001678 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="kube-rbac-proxy"
Apr 23 18:03:46.001824 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.001684 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="kube-rbac-proxy"
Apr 23 18:03:46.001824 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.001701 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="kserve-container"
Apr 23 18:03:46.001824 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.001707 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="kserve-container"
Apr 23 18:03:46.001824 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.001713 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="413f510b-b6a9-44b1-8428-3a0e318ea683" containerName="switch-graph-e7d75"
Apr 23 18:03:46.001824 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.001718 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="413f510b-b6a9-44b1-8428-3a0e318ea683" containerName="switch-graph-e7d75"
Apr 23 18:03:46.001824 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.001755 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="kserve-container"
Apr 23 18:03:46.001824 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.001762 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="413f510b-b6a9-44b1-8428-3a0e318ea683" containerName="switch-graph-e7d75"
Apr 23 18:03:46.001824 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.001772 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="9742f92c-6d03-44d2-892a-06e0d31fab64" containerName="kube-rbac-proxy"
Apr 23 18:03:46.005846 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.005828 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97"
Apr 23 18:03:46.007622 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.007599 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-3d086-kube-rbac-proxy-sar-config\""
Apr 23 18:03:46.007681 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.007671 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-3d086-serving-cert\""
Apr 23 18:03:46.019035 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.019009 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97"]
Apr 23 18:03:46.179943 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.179908 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e72973c-b887-4181-b2d5-936ddc1b927f-openshift-service-ca-bundle\") pod \"switch-graph-3d086-59947dfbb9-jnm97\" (UID: \"8e72973c-b887-4181-b2d5-936ddc1b927f\") " pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97"
Apr 23 18:03:46.180106 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.179959 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e72973c-b887-4181-b2d5-936ddc1b927f-proxy-tls\") pod \"switch-graph-3d086-59947dfbb9-jnm97\" (UID: \"8e72973c-b887-4181-b2d5-936ddc1b927f\") " pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97"
Apr 23 18:03:46.281388 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.281300 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e72973c-b887-4181-b2d5-936ddc1b927f-openshift-service-ca-bundle\") pod \"switch-graph-3d086-59947dfbb9-jnm97\" (UID: \"8e72973c-b887-4181-b2d5-936ddc1b927f\") " pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97"
Apr 23 18:03:46.281388 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.281351 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e72973c-b887-4181-b2d5-936ddc1b927f-proxy-tls\") pod \"switch-graph-3d086-59947dfbb9-jnm97\" (UID: \"8e72973c-b887-4181-b2d5-936ddc1b927f\") " pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97"
Apr 23 18:03:46.281964 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.281936 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e72973c-b887-4181-b2d5-936ddc1b927f-openshift-service-ca-bundle\") pod \"switch-graph-3d086-59947dfbb9-jnm97\" (UID: \"8e72973c-b887-4181-b2d5-936ddc1b927f\") " pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97"
Apr 23 18:03:46.283645 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.283621 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e72973c-b887-4181-b2d5-936ddc1b927f-proxy-tls\") pod \"switch-graph-3d086-59947dfbb9-jnm97\" (UID: \"8e72973c-b887-4181-b2d5-936ddc1b927f\") " pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97"
Apr 23 18:03:46.320679 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.320656 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97"
Apr 23 18:03:46.437249 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:46.437216 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97"]
Apr 23 18:03:46.441166 ip-10-0-133-178 kubenswrapper[2572]: W0423 18:03:46.441140 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e72973c_b887_4181_b2d5_936ddc1b927f.slice/crio-df8323c1866e0f7707006fbd120dfad6c34fe164127dd0bdb51c602bdb9652f4 WatchSource:0}: Error finding container df8323c1866e0f7707006fbd120dfad6c34fe164127dd0bdb51c602bdb9652f4: Status 404 returned error can't find the container with id df8323c1866e0f7707006fbd120dfad6c34fe164127dd0bdb51c602bdb9652f4
Apr 23 18:03:47.256863 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:47.256823 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97" event={"ID":"8e72973c-b887-4181-b2d5-936ddc1b927f","Type":"ContainerStarted","Data":"72c3eae285e76b79ebf5d18771557ad2cce501d4eadebad27b21a2a304910111"}
Apr 23 18:03:47.256863 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:47.256861 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97" event={"ID":"8e72973c-b887-4181-b2d5-936ddc1b927f","Type":"ContainerStarted","Data":"df8323c1866e0f7707006fbd120dfad6c34fe164127dd0bdb51c602bdb9652f4"}
Apr 23 18:03:47.257370 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:47.256898 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97"
Apr 23 18:03:47.274020 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:47.273903 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97" podStartSLOduration=2.273884584 podStartE2EDuration="2.273884584s" podCreationTimestamp="2026-04-23 18:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:03:47.273167825 +0000 UTC m=+702.468072552" watchObservedRunningTime="2026-04-23 18:03:47.273884584 +0000 UTC m=+702.468789313"
Apr 23 18:03:48.171701 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:48.171654 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" podUID="58386c59-d272-43c1-b6bc-ad193ef4461e" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:03:53.171507 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:53.171463 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" podUID="58386c59-d272-43c1-b6bc-ad193ef4461e" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:03:53.265763 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:53.265738 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97"
Apr 23 18:03:55.791909 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:55.791885 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb"
Apr 23 18:03:55.845667 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:55.845642 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58386c59-d272-43c1-b6bc-ad193ef4461e-openshift-service-ca-bundle\") pod \"58386c59-d272-43c1-b6bc-ad193ef4461e\" (UID: \"58386c59-d272-43c1-b6bc-ad193ef4461e\") "
Apr 23 18:03:55.845801 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:55.845686 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/58386c59-d272-43c1-b6bc-ad193ef4461e-proxy-tls\") pod \"58386c59-d272-43c1-b6bc-ad193ef4461e\" (UID: \"58386c59-d272-43c1-b6bc-ad193ef4461e\") "
Apr 23 18:03:55.845961 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:55.845939 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58386c59-d272-43c1-b6bc-ad193ef4461e-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "58386c59-d272-43c1-b6bc-ad193ef4461e" (UID: "58386c59-d272-43c1-b6bc-ad193ef4461e"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:03:55.847615 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:55.847591 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58386c59-d272-43c1-b6bc-ad193ef4461e-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "58386c59-d272-43c1-b6bc-ad193ef4461e" (UID: "58386c59-d272-43c1-b6bc-ad193ef4461e"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:03:55.946164 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:55.946082 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58386c59-d272-43c1-b6bc-ad193ef4461e-openshift-service-ca-bundle\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\""
Apr 23 18:03:55.946164 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:55.946114 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/58386c59-d272-43c1-b6bc-ad193ef4461e-proxy-tls\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\""
Apr 23 18:03:56.283136 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:56.283105 2572 generic.go:358] "Generic (PLEG): container finished" podID="58386c59-d272-43c1-b6bc-ad193ef4461e" containerID="cfb157e98e95ad60382ec5f13d054f26e48922b7a9713914c452d7a97f3af3b5" exitCode=0
Apr 23 18:03:56.283327 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:56.283161 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb"
Apr 23 18:03:56.283327 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:56.283192 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" event={"ID":"58386c59-d272-43c1-b6bc-ad193ef4461e","Type":"ContainerDied","Data":"cfb157e98e95ad60382ec5f13d054f26e48922b7a9713914c452d7a97f3af3b5"}
Apr 23 18:03:56.283327 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:56.283236 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb" event={"ID":"58386c59-d272-43c1-b6bc-ad193ef4461e","Type":"ContainerDied","Data":"ab959e01dc61ed02111286caa49e2f9e86e4252d509d83c3354a31807e51a74f"}
Apr 23 18:03:56.283327 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:56.283259 2572 scope.go:117] "RemoveContainer" containerID="cfb157e98e95ad60382ec5f13d054f26e48922b7a9713914c452d7a97f3af3b5"
Apr 23 18:03:56.290899 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:56.290876 2572 scope.go:117] "RemoveContainer" containerID="cfb157e98e95ad60382ec5f13d054f26e48922b7a9713914c452d7a97f3af3b5"
Apr 23 18:03:56.291155 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:03:56.291136 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfb157e98e95ad60382ec5f13d054f26e48922b7a9713914c452d7a97f3af3b5\": container with ID starting with cfb157e98e95ad60382ec5f13d054f26e48922b7a9713914c452d7a97f3af3b5 not found: ID does not exist" containerID="cfb157e98e95ad60382ec5f13d054f26e48922b7a9713914c452d7a97f3af3b5"
Apr 23 18:03:56.291222 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:56.291169 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfb157e98e95ad60382ec5f13d054f26e48922b7a9713914c452d7a97f3af3b5"} err="failed to get container status \"cfb157e98e95ad60382ec5f13d054f26e48922b7a9713914c452d7a97f3af3b5\": rpc error: code = NotFound desc = could not find container \"cfb157e98e95ad60382ec5f13d054f26e48922b7a9713914c452d7a97f3af3b5\": container with ID starting with cfb157e98e95ad60382ec5f13d054f26e48922b7a9713914c452d7a97f3af3b5 not found: ID does not exist"
Apr 23 18:03:56.304040 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:56.304015 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb"]
Apr 23 18:03:56.307907 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:56.307885 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/model-chainer-5894656ff9-2kqdb"]
Apr 23 18:03:57.417717 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:03:57.417688 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58386c59-d272-43c1-b6bc-ad193ef4461e" path="/var/lib/kubelet/pods/58386c59-d272-43c1-b6bc-ad193ef4461e/volumes"
Apr 23 18:04:25.875737 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:25.875698 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"]
Apr 23 18:04:25.876593 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:25.876074 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="58386c59-d272-43c1-b6bc-ad193ef4461e" containerName="model-chainer"
Apr 23 18:04:25.876593 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:25.876091 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="58386c59-d272-43c1-b6bc-ad193ef4461e" containerName="model-chainer"
Apr 23 18:04:25.876593 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:25.876149 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="58386c59-d272-43c1-b6bc-ad193ef4461e" containerName="model-chainer"
Apr 23 18:04:25.879243 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:25.879222 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"
Apr 23 18:04:25.881287 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:25.881265 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-0065f-serving-cert\""
Apr 23 18:04:25.881584 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:25.881568 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-0065f-kube-rbac-proxy-sar-config\""
Apr 23 18:04:25.889203 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:25.889183 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"]
Apr 23 18:04:25.945612 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:25.945579 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/93c16c6f-faad-4314-abe0-0789a35a76bb-proxy-tls\") pod \"sequence-graph-0065f-6c7bbccfdc-477lq\" (UID: \"93c16c6f-faad-4314-abe0-0789a35a76bb\") " pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"
Apr 23 18:04:25.945760 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:25.945652 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93c16c6f-faad-4314-abe0-0789a35a76bb-openshift-service-ca-bundle\") pod \"sequence-graph-0065f-6c7bbccfdc-477lq\" (UID: \"93c16c6f-faad-4314-abe0-0789a35a76bb\") " pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"
Apr 23 18:04:26.046966 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:26.046929 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93c16c6f-faad-4314-abe0-0789a35a76bb-openshift-service-ca-bundle\") pod \"sequence-graph-0065f-6c7bbccfdc-477lq\" (UID: \"93c16c6f-faad-4314-abe0-0789a35a76bb\") " pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"
Apr 23 18:04:26.047101 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:26.046984 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/93c16c6f-faad-4314-abe0-0789a35a76bb-proxy-tls\") pod \"sequence-graph-0065f-6c7bbccfdc-477lq\" (UID: \"93c16c6f-faad-4314-abe0-0789a35a76bb\") " pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"
Apr 23 18:04:26.047101 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:04:26.047072 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/sequence-graph-0065f-serving-cert: secret "sequence-graph-0065f-serving-cert" not found
Apr 23 18:04:26.047193 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:04:26.047135 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93c16c6f-faad-4314-abe0-0789a35a76bb-proxy-tls podName:93c16c6f-faad-4314-abe0-0789a35a76bb nodeName:}" failed. No retries permitted until 2026-04-23 18:04:26.547117799 +0000 UTC m=+741.742022504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/93c16c6f-faad-4314-abe0-0789a35a76bb-proxy-tls") pod "sequence-graph-0065f-6c7bbccfdc-477lq" (UID: "93c16c6f-faad-4314-abe0-0789a35a76bb") : secret "sequence-graph-0065f-serving-cert" not found
Apr 23 18:04:26.047610 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:26.047592 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93c16c6f-faad-4314-abe0-0789a35a76bb-openshift-service-ca-bundle\") pod \"sequence-graph-0065f-6c7bbccfdc-477lq\" (UID: \"93c16c6f-faad-4314-abe0-0789a35a76bb\") " pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"
Apr 23 18:04:26.550344 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:26.550304 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/93c16c6f-faad-4314-abe0-0789a35a76bb-proxy-tls\") pod \"sequence-graph-0065f-6c7bbccfdc-477lq\" (UID: \"93c16c6f-faad-4314-abe0-0789a35a76bb\") " pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"
Apr 23 18:04:26.552735 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:26.552701 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/93c16c6f-faad-4314-abe0-0789a35a76bb-proxy-tls\") pod \"sequence-graph-0065f-6c7bbccfdc-477lq\" (UID: \"93c16c6f-faad-4314-abe0-0789a35a76bb\") " pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"
Apr 23 18:04:26.789540 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:26.789502 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"
Apr 23 18:04:26.904680 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:26.904650 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"]
Apr 23 18:04:26.907821 ip-10-0-133-178 kubenswrapper[2572]: W0423 18:04:26.907791 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93c16c6f_faad_4314_abe0_0789a35a76bb.slice/crio-88eda1aa1d5d3679a7eb7eee1da7700ae6b8f66d040be1812562760fc8fc907e WatchSource:0}: Error finding container 88eda1aa1d5d3679a7eb7eee1da7700ae6b8f66d040be1812562760fc8fc907e: Status 404 returned error can't find the container with id 88eda1aa1d5d3679a7eb7eee1da7700ae6b8f66d040be1812562760fc8fc907e
Apr 23 18:04:27.365677 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:27.365641 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq" event={"ID":"93c16c6f-faad-4314-abe0-0789a35a76bb","Type":"ContainerStarted","Data":"e9d57e9a31fb48e744931d130bcfb83bae984cab11c529f0dcbffe89bca7159d"}
Apr 23 18:04:27.365677 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:27.365680 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq" event={"ID":"93c16c6f-faad-4314-abe0-0789a35a76bb","Type":"ContainerStarted","Data":"88eda1aa1d5d3679a7eb7eee1da7700ae6b8f66d040be1812562760fc8fc907e"}
Apr 23 18:04:27.365892 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:27.365774 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"
Apr 23 18:04:27.381438 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:27.381373 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq" podStartSLOduration=2.381358266 podStartE2EDuration="2.381358266s" podCreationTimestamp="2026-04-23 18:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:04:27.381073216 +0000 UTC m=+742.575977944" watchObservedRunningTime="2026-04-23 18:04:27.381358266 +0000 UTC m=+742.576262992"
Apr 23 18:04:33.374818 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:04:33.374792 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"
Apr 23 18:06:17.505482 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.505452 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-fbf6f99cd-qhqr9"]
Apr 23 18:06:17.508495 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.508478 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.511013 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.510985 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Apr 23 18:06:17.511013 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.510997 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Apr 23 18:06:17.511211 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.511060 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Apr 23 18:06:17.511211 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.511075 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Apr 23 18:06:17.511211 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.511089 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Apr 23 18:06:17.511211 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.510993 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Apr 23 18:06:17.511211 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.510998 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Apr 23 18:06:17.511211 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.510993 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-5lt52\""
Apr 23 18:06:17.515792 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.515761 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Apr 23 18:06:17.530608 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.530575 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fbf6f99cd-qhqr9"]
Apr 23 18:06:17.545104 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.545079 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d442ca0-87d3-49df-afc8-a3323de055cd-console-serving-cert\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.545104 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.545110 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ch8k\" (UniqueName: \"kubernetes.io/projected/0d442ca0-87d3-49df-afc8-a3323de055cd-kube-api-access-5ch8k\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.545267 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.545149 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d442ca0-87d3-49df-afc8-a3323de055cd-oauth-serving-cert\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.545267 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.545242 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d442ca0-87d3-49df-afc8-a3323de055cd-console-config\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.545418 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.545382 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d442ca0-87d3-49df-afc8-a3323de055cd-trusted-ca-bundle\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.545487 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.545446 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d442ca0-87d3-49df-afc8-a3323de055cd-console-oauth-config\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.545560 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.545544 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d442ca0-87d3-49df-afc8-a3323de055cd-service-ca\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.646293 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.646262 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d442ca0-87d3-49df-afc8-a3323de055cd-oauth-serving-cert\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.646478 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.646301 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d442ca0-87d3-49df-afc8-a3323de055cd-console-config\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.646478 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.646414 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d442ca0-87d3-49df-afc8-a3323de055cd-trusted-ca-bundle\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.646478 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.646434 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d442ca0-87d3-49df-afc8-a3323de055cd-console-oauth-config\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.646478 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.646457 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d442ca0-87d3-49df-afc8-a3323de055cd-service-ca\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.646706 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.646481 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d442ca0-87d3-49df-afc8-a3323de055cd-console-serving-cert\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.646706 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.646497 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5ch8k\" (UniqueName: \"kubernetes.io/projected/0d442ca0-87d3-49df-afc8-a3323de055cd-kube-api-access-5ch8k\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.647099 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.647071 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d442ca0-87d3-49df-afc8-a3323de055cd-console-config\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.647220 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.647079 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d442ca0-87d3-49df-afc8-a3323de055cd-oauth-serving-cert\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.647220 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.647121 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d442ca0-87d3-49df-afc8-a3323de055cd-service-ca\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.647309 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.647247 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d442ca0-87d3-49df-afc8-a3323de055cd-trusted-ca-bundle\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.649007 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.648984 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d442ca0-87d3-49df-afc8-a3323de055cd-console-serving-cert\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.649125 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.649004 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d442ca0-87d3-49df-afc8-a3323de055cd-console-oauth-config\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.654762 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.654741 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ch8k\" (UniqueName: \"kubernetes.io/projected/0d442ca0-87d3-49df-afc8-a3323de055cd-kube-api-access-5ch8k\") pod \"console-fbf6f99cd-qhqr9\" (UID: \"0d442ca0-87d3-49df-afc8-a3323de055cd\") " pod="openshift-console/console-fbf6f99cd-qhqr9"
Apr 23 18:06:17.818749 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.818669 2572 util.go:30] "No sandbox for pod can be
found. Need to start a new one" pod="openshift-console/console-fbf6f99cd-qhqr9" Apr 23 18:06:17.937654 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:17.937624 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fbf6f99cd-qhqr9"] Apr 23 18:06:17.941033 ip-10-0-133-178 kubenswrapper[2572]: W0423 18:06:17.941010 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d442ca0_87d3_49df_afc8_a3323de055cd.slice/crio-c9ef16638079d72b142fb9af2e7236461ea6af554c9de97ea0483b83f0eb0747 WatchSource:0}: Error finding container c9ef16638079d72b142fb9af2e7236461ea6af554c9de97ea0483b83f0eb0747: Status 404 returned error can't find the container with id c9ef16638079d72b142fb9af2e7236461ea6af554c9de97ea0483b83f0eb0747 Apr 23 18:06:18.654661 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:18.654625 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fbf6f99cd-qhqr9" event={"ID":"0d442ca0-87d3-49df-afc8-a3323de055cd","Type":"ContainerStarted","Data":"422ed038f55a1020cdc56f756e66d44262fb1f23b41ae9a5c62a7ef73e6f1338"} Apr 23 18:06:18.655029 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:18.654667 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fbf6f99cd-qhqr9" event={"ID":"0d442ca0-87d3-49df-afc8-a3323de055cd","Type":"ContainerStarted","Data":"c9ef16638079d72b142fb9af2e7236461ea6af554c9de97ea0483b83f0eb0747"} Apr 23 18:06:18.674412 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:18.674359 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-fbf6f99cd-qhqr9" podStartSLOduration=1.674345024 podStartE2EDuration="1.674345024s" podCreationTimestamp="2026-04-23 18:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:06:18.672842163 +0000 UTC 
m=+853.867746891" watchObservedRunningTime="2026-04-23 18:06:18.674345024 +0000 UTC m=+853.869249751" Apr 23 18:06:27.819084 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:27.819046 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-fbf6f99cd-qhqr9" Apr 23 18:06:27.819084 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:27.819087 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-fbf6f99cd-qhqr9" Apr 23 18:06:27.823833 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:27.823810 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-fbf6f99cd-qhqr9" Apr 23 18:06:28.687210 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:06:28.687172 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-fbf6f99cd-qhqr9" Apr 23 18:07:05.363584 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:07:05.363501 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 18:07:05.363584 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:07:05.363535 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 18:07:05.368572 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:07:05.368548 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 18:07:05.368718 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:07:05.368678 2572 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 18:12:00.560928 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:00.560880 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97"] Apr 23 18:12:00.561510 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:00.561192 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97" podUID="8e72973c-b887-4181-b2d5-936ddc1b927f" containerName="switch-graph-3d086" containerID="cri-o://72c3eae285e76b79ebf5d18771557ad2cce501d4eadebad27b21a2a304910111" gracePeriod=30 Apr 23 18:12:03.263790 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:03.263751 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97" podUID="8e72973c-b887-4181-b2d5-936ddc1b927f" containerName="switch-graph-3d086" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:12:05.382757 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:05.382727 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 18:12:05.386183 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:05.386160 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 18:12:05.389872 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:05.389854 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 18:12:05.390887 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:05.390869 2572 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 18:12:08.264378 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:08.264338 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97" podUID="8e72973c-b887-4181-b2d5-936ddc1b927f" containerName="switch-graph-3d086" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:12:13.263999 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:13.263959 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97" podUID="8e72973c-b887-4181-b2d5-936ddc1b927f" containerName="switch-graph-3d086" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:12:13.264418 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:13.264072 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97" Apr 23 18:12:18.264077 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:18.264035 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97" podUID="8e72973c-b887-4181-b2d5-936ddc1b927f" containerName="switch-graph-3d086" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:12:23.264749 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:23.264714 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97" podUID="8e72973c-b887-4181-b2d5-936ddc1b927f" containerName="switch-graph-3d086" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:12:28.263877 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:28.263831 2572 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97" podUID="8e72973c-b887-4181-b2d5-936ddc1b927f" containerName="switch-graph-3d086" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:12:30.630024 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:30.629989 2572 generic.go:358] "Generic (PLEG): container finished" podID="8e72973c-b887-4181-b2d5-936ddc1b927f" containerID="72c3eae285e76b79ebf5d18771557ad2cce501d4eadebad27b21a2a304910111" exitCode=0 Apr 23 18:12:30.630387 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:30.630048 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97" event={"ID":"8e72973c-b887-4181-b2d5-936ddc1b927f","Type":"ContainerDied","Data":"72c3eae285e76b79ebf5d18771557ad2cce501d4eadebad27b21a2a304910111"} Apr 23 18:12:30.697254 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:30.697234 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97" Apr 23 18:12:30.831636 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:30.831545 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e72973c-b887-4181-b2d5-936ddc1b927f-openshift-service-ca-bundle\") pod \"8e72973c-b887-4181-b2d5-936ddc1b927f\" (UID: \"8e72973c-b887-4181-b2d5-936ddc1b927f\") " Apr 23 18:12:30.831636 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:30.831594 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e72973c-b887-4181-b2d5-936ddc1b927f-proxy-tls\") pod \"8e72973c-b887-4181-b2d5-936ddc1b927f\" (UID: \"8e72973c-b887-4181-b2d5-936ddc1b927f\") " Apr 23 18:12:30.831921 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:30.831896 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8e72973c-b887-4181-b2d5-936ddc1b927f-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "8e72973c-b887-4181-b2d5-936ddc1b927f" (UID: "8e72973c-b887-4181-b2d5-936ddc1b927f"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:12:30.833638 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:30.833613 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e72973c-b887-4181-b2d5-936ddc1b927f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "8e72973c-b887-4181-b2d5-936ddc1b927f" (UID: "8e72973c-b887-4181-b2d5-936ddc1b927f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:12:30.932182 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:30.932147 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e72973c-b887-4181-b2d5-936ddc1b927f-openshift-service-ca-bundle\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:12:30.932182 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:30.932181 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e72973c-b887-4181-b2d5-936ddc1b927f-proxy-tls\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:12:31.633704 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:31.633664 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97" event={"ID":"8e72973c-b887-4181-b2d5-936ddc1b927f","Type":"ContainerDied","Data":"df8323c1866e0f7707006fbd120dfad6c34fe164127dd0bdb51c602bdb9652f4"} Apr 23 18:12:31.633704 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:31.633709 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97" Apr 23 18:12:31.634227 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:31.633721 2572 scope.go:117] "RemoveContainer" containerID="72c3eae285e76b79ebf5d18771557ad2cce501d4eadebad27b21a2a304910111" Apr 23 18:12:31.650508 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:31.650471 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97"] Apr 23 18:12:31.656496 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:31.656468 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-3d086-59947dfbb9-jnm97"] Apr 23 18:12:33.418145 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:33.418113 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e72973c-b887-4181-b2d5-936ddc1b927f" path="/var/lib/kubelet/pods/8e72973c-b887-4181-b2d5-936ddc1b927f/volumes" Apr 23 18:12:40.592274 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:40.592243 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"] Apr 23 18:12:40.592663 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:40.592496 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq" podUID="93c16c6f-faad-4314-abe0-0789a35a76bb" containerName="sequence-graph-0065f" containerID="cri-o://e9d57e9a31fb48e744931d130bcfb83bae984cab11c529f0dcbffe89bca7159d" gracePeriod=30 Apr 23 18:12:43.373182 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:43.373143 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq" podUID="93c16c6f-faad-4314-abe0-0789a35a76bb" containerName="sequence-graph-0065f" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:12:48.372906 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:48.372816 2572 
prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq" podUID="93c16c6f-faad-4314-abe0-0789a35a76bb" containerName="sequence-graph-0065f" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:12:53.373014 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:53.372972 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq" podUID="93c16c6f-faad-4314-abe0-0789a35a76bb" containerName="sequence-graph-0065f" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:12:53.373381 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:53.373097 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq" Apr 23 18:12:58.372780 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:12:58.372744 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq" podUID="93c16c6f-faad-4314-abe0-0789a35a76bb" containerName="sequence-graph-0065f" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:13:00.782394 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:00.782353 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh"] Apr 23 18:13:00.782903 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:00.782749 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8e72973c-b887-4181-b2d5-936ddc1b927f" containerName="switch-graph-3d086" Apr 23 18:13:00.782903 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:00.782768 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e72973c-b887-4181-b2d5-936ddc1b927f" containerName="switch-graph-3d086" Apr 23 18:13:00.782903 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:00.782831 2572 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="8e72973c-b887-4181-b2d5-936ddc1b927f" containerName="switch-graph-3d086" Apr 23 18:13:00.785617 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:00.785595 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" Apr 23 18:13:00.787667 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:00.787646 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-226e9-serving-cert\"" Apr 23 18:13:00.787667 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:00.787661 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-226e9-kube-rbac-proxy-sar-config\"" Apr 23 18:13:00.795623 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:00.795603 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh"] Apr 23 18:13:00.853315 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:00.853283 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5fe1e4c4-4461-4367-9e98-75a1e61bc178-proxy-tls\") pod \"ensemble-graph-226e9-596665bfbf-p64qh\" (UID: \"5fe1e4c4-4461-4367-9e98-75a1e61bc178\") " pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" Apr 23 18:13:00.853518 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:00.853345 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe1e4c4-4461-4367-9e98-75a1e61bc178-openshift-service-ca-bundle\") pod \"ensemble-graph-226e9-596665bfbf-p64qh\" (UID: \"5fe1e4c4-4461-4367-9e98-75a1e61bc178\") " pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" Apr 23 18:13:00.954609 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:00.954565 2572 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe1e4c4-4461-4367-9e98-75a1e61bc178-openshift-service-ca-bundle\") pod \"ensemble-graph-226e9-596665bfbf-p64qh\" (UID: \"5fe1e4c4-4461-4367-9e98-75a1e61bc178\") " pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" Apr 23 18:13:00.954801 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:00.954655 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5fe1e4c4-4461-4367-9e98-75a1e61bc178-proxy-tls\") pod \"ensemble-graph-226e9-596665bfbf-p64qh\" (UID: \"5fe1e4c4-4461-4367-9e98-75a1e61bc178\") " pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" Apr 23 18:13:00.954801 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:13:00.954787 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/ensemble-graph-226e9-serving-cert: secret "ensemble-graph-226e9-serving-cert" not found Apr 23 18:13:00.954925 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:13:00.954859 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe1e4c4-4461-4367-9e98-75a1e61bc178-proxy-tls podName:5fe1e4c4-4461-4367-9e98-75a1e61bc178 nodeName:}" failed. No retries permitted until 2026-04-23 18:13:01.45483736 +0000 UTC m=+1256.649742073 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5fe1e4c4-4461-4367-9e98-75a1e61bc178-proxy-tls") pod "ensemble-graph-226e9-596665bfbf-p64qh" (UID: "5fe1e4c4-4461-4367-9e98-75a1e61bc178") : secret "ensemble-graph-226e9-serving-cert" not found Apr 23 18:13:00.955245 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:00.955225 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe1e4c4-4461-4367-9e98-75a1e61bc178-openshift-service-ca-bundle\") pod \"ensemble-graph-226e9-596665bfbf-p64qh\" (UID: \"5fe1e4c4-4461-4367-9e98-75a1e61bc178\") " pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" Apr 23 18:13:01.458965 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:01.458928 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5fe1e4c4-4461-4367-9e98-75a1e61bc178-proxy-tls\") pod \"ensemble-graph-226e9-596665bfbf-p64qh\" (UID: \"5fe1e4c4-4461-4367-9e98-75a1e61bc178\") " pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" Apr 23 18:13:01.461273 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:01.461253 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5fe1e4c4-4461-4367-9e98-75a1e61bc178-proxy-tls\") pod \"ensemble-graph-226e9-596665bfbf-p64qh\" (UID: \"5fe1e4c4-4461-4367-9e98-75a1e61bc178\") " pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" Apr 23 18:13:01.694549 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:01.694510 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" Apr 23 18:13:01.816333 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:01.816304 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh"] Apr 23 18:13:01.819078 ip-10-0-133-178 kubenswrapper[2572]: W0423 18:13:01.819053 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe1e4c4_4461_4367_9e98_75a1e61bc178.slice/crio-3c1927259818af4b50edf2baf556c1fbba3ef89b54331e0937b5d85ff5fc2985 WatchSource:0}: Error finding container 3c1927259818af4b50edf2baf556c1fbba3ef89b54331e0937b5d85ff5fc2985: Status 404 returned error can't find the container with id 3c1927259818af4b50edf2baf556c1fbba3ef89b54331e0937b5d85ff5fc2985 Apr 23 18:13:01.821143 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:01.821129 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 18:13:02.719712 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:02.719672 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" event={"ID":"5fe1e4c4-4461-4367-9e98-75a1e61bc178","Type":"ContainerStarted","Data":"d98089537cb4f98af3e3e14f35d3b7260df779da8616c5b222fe2c49f7e22af8"} Apr 23 18:13:02.719712 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:02.719717 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" event={"ID":"5fe1e4c4-4461-4367-9e98-75a1e61bc178","Type":"ContainerStarted","Data":"3c1927259818af4b50edf2baf556c1fbba3ef89b54331e0937b5d85ff5fc2985"} Apr 23 18:13:02.719994 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:02.719812 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" Apr 23 18:13:03.372974 ip-10-0-133-178 
kubenswrapper[2572]: I0423 18:13:03.372931 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq" podUID="93c16c6f-faad-4314-abe0-0789a35a76bb" containerName="sequence-graph-0065f" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:13:08.373140 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:08.373094 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq" podUID="93c16c6f-faad-4314-abe0-0789a35a76bb" containerName="sequence-graph-0065f" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:13:08.729422 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:08.729376 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" Apr 23 18:13:08.746384 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:08.746337 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" podStartSLOduration=8.746323241 podStartE2EDuration="8.746323241s" podCreationTimestamp="2026-04-23 18:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:13:02.735101901 +0000 UTC m=+1257.930006631" watchObservedRunningTime="2026-04-23 18:13:08.746323241 +0000 UTC m=+1263.941227967" Apr 23 18:13:10.731634 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:10.731609 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq" Apr 23 18:13:10.742301 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:10.742276 2572 generic.go:358] "Generic (PLEG): container finished" podID="93c16c6f-faad-4314-abe0-0789a35a76bb" containerID="e9d57e9a31fb48e744931d130bcfb83bae984cab11c529f0dcbffe89bca7159d" exitCode=0 Apr 23 18:13:10.742394 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:10.742323 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq" Apr 23 18:13:10.742394 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:10.742355 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq" event={"ID":"93c16c6f-faad-4314-abe0-0789a35a76bb","Type":"ContainerDied","Data":"e9d57e9a31fb48e744931d130bcfb83bae984cab11c529f0dcbffe89bca7159d"} Apr 23 18:13:10.742490 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:10.742393 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq" event={"ID":"93c16c6f-faad-4314-abe0-0789a35a76bb","Type":"ContainerDied","Data":"88eda1aa1d5d3679a7eb7eee1da7700ae6b8f66d040be1812562760fc8fc907e"} Apr 23 18:13:10.742490 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:10.742424 2572 scope.go:117] "RemoveContainer" containerID="e9d57e9a31fb48e744931d130bcfb83bae984cab11c529f0dcbffe89bca7159d" Apr 23 18:13:10.749989 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:10.749966 2572 scope.go:117] "RemoveContainer" containerID="e9d57e9a31fb48e744931d130bcfb83bae984cab11c529f0dcbffe89bca7159d" Apr 23 18:13:10.750261 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:13:10.750239 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9d57e9a31fb48e744931d130bcfb83bae984cab11c529f0dcbffe89bca7159d\": container with ID starting with 
e9d57e9a31fb48e744931d130bcfb83bae984cab11c529f0dcbffe89bca7159d not found: ID does not exist" containerID="e9d57e9a31fb48e744931d130bcfb83bae984cab11c529f0dcbffe89bca7159d" Apr 23 18:13:10.750330 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:10.750271 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9d57e9a31fb48e744931d130bcfb83bae984cab11c529f0dcbffe89bca7159d"} err="failed to get container status \"e9d57e9a31fb48e744931d130bcfb83bae984cab11c529f0dcbffe89bca7159d\": rpc error: code = NotFound desc = could not find container \"e9d57e9a31fb48e744931d130bcfb83bae984cab11c529f0dcbffe89bca7159d\": container with ID starting with e9d57e9a31fb48e744931d130bcfb83bae984cab11c529f0dcbffe89bca7159d not found: ID does not exist" Apr 23 18:13:10.831727 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:10.831687 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/93c16c6f-faad-4314-abe0-0789a35a76bb-proxy-tls\") pod \"93c16c6f-faad-4314-abe0-0789a35a76bb\" (UID: \"93c16c6f-faad-4314-abe0-0789a35a76bb\") " Apr 23 18:13:10.831904 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:10.831738 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93c16c6f-faad-4314-abe0-0789a35a76bb-openshift-service-ca-bundle\") pod \"93c16c6f-faad-4314-abe0-0789a35a76bb\" (UID: \"93c16c6f-faad-4314-abe0-0789a35a76bb\") " Apr 23 18:13:10.832190 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:10.832144 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c16c6f-faad-4314-abe0-0789a35a76bb-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "93c16c6f-faad-4314-abe0-0789a35a76bb" (UID: "93c16c6f-faad-4314-abe0-0789a35a76bb"). InnerVolumeSpecName "openshift-service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:13:10.833870 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:10.833847 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c16c6f-faad-4314-abe0-0789a35a76bb-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "93c16c6f-faad-4314-abe0-0789a35a76bb" (UID: "93c16c6f-faad-4314-abe0-0789a35a76bb"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:13:10.841773 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:10.841752 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh"] Apr 23 18:13:10.841970 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:10.841950 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" podUID="5fe1e4c4-4461-4367-9e98-75a1e61bc178" containerName="ensemble-graph-226e9" containerID="cri-o://d98089537cb4f98af3e3e14f35d3b7260df779da8616c5b222fe2c49f7e22af8" gracePeriod=30 Apr 23 18:13:10.933053 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:10.932984 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/93c16c6f-faad-4314-abe0-0789a35a76bb-proxy-tls\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:13:10.933053 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:10.933011 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93c16c6f-faad-4314-abe0-0789a35a76bb-openshift-service-ca-bundle\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:13:11.070992 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:11.070960 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"] Apr 23 18:13:11.077254 ip-10-0-133-178 kubenswrapper[2572]: 
I0423 18:13:11.077227 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-0065f-6c7bbccfdc-477lq"] Apr 23 18:13:11.417940 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:11.417907 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c16c6f-faad-4314-abe0-0789a35a76bb" path="/var/lib/kubelet/pods/93c16c6f-faad-4314-abe0-0789a35a76bb/volumes" Apr 23 18:13:13.727333 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:13.727295 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" podUID="5fe1e4c4-4461-4367-9e98-75a1e61bc178" containerName="ensemble-graph-226e9" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:13:18.727490 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:18.727444 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" podUID="5fe1e4c4-4461-4367-9e98-75a1e61bc178" containerName="ensemble-graph-226e9" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:13:23.726916 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:23.726874 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" podUID="5fe1e4c4-4461-4367-9e98-75a1e61bc178" containerName="ensemble-graph-226e9" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:13:23.727378 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:23.726970 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" Apr 23 18:13:28.727369 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:28.727325 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" podUID="5fe1e4c4-4461-4367-9e98-75a1e61bc178" containerName="ensemble-graph-226e9" 
probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:13:33.727180 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:33.727134 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" podUID="5fe1e4c4-4461-4367-9e98-75a1e61bc178" containerName="ensemble-graph-226e9" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:13:38.727720 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:38.727672 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" podUID="5fe1e4c4-4461-4367-9e98-75a1e61bc178" containerName="ensemble-graph-226e9" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:13:40.969776 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:40.969755 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" Apr 23 18:13:41.057098 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:41.057067 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5fe1e4c4-4461-4367-9e98-75a1e61bc178-proxy-tls\") pod \"5fe1e4c4-4461-4367-9e98-75a1e61bc178\" (UID: \"5fe1e4c4-4461-4367-9e98-75a1e61bc178\") " Apr 23 18:13:41.057299 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:41.057173 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe1e4c4-4461-4367-9e98-75a1e61bc178-openshift-service-ca-bundle\") pod \"5fe1e4c4-4461-4367-9e98-75a1e61bc178\" (UID: \"5fe1e4c4-4461-4367-9e98-75a1e61bc178\") " Apr 23 18:13:41.057585 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:41.057550 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5fe1e4c4-4461-4367-9e98-75a1e61bc178-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "5fe1e4c4-4461-4367-9e98-75a1e61bc178" (UID: "5fe1e4c4-4461-4367-9e98-75a1e61bc178"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:13:41.059133 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:41.059116 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe1e4c4-4461-4367-9e98-75a1e61bc178-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "5fe1e4c4-4461-4367-9e98-75a1e61bc178" (UID: "5fe1e4c4-4461-4367-9e98-75a1e61bc178"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:13:41.158566 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:41.158467 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5fe1e4c4-4461-4367-9e98-75a1e61bc178-proxy-tls\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:13:41.158566 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:41.158513 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe1e4c4-4461-4367-9e98-75a1e61bc178-openshift-service-ca-bundle\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:13:41.830555 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:41.830521 2572 generic.go:358] "Generic (PLEG): container finished" podID="5fe1e4c4-4461-4367-9e98-75a1e61bc178" containerID="d98089537cb4f98af3e3e14f35d3b7260df779da8616c5b222fe2c49f7e22af8" exitCode=0 Apr 23 18:13:41.830724 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:41.830561 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" 
event={"ID":"5fe1e4c4-4461-4367-9e98-75a1e61bc178","Type":"ContainerDied","Data":"d98089537cb4f98af3e3e14f35d3b7260df779da8616c5b222fe2c49f7e22af8"} Apr 23 18:13:41.830724 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:41.830582 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" event={"ID":"5fe1e4c4-4461-4367-9e98-75a1e61bc178","Type":"ContainerDied","Data":"3c1927259818af4b50edf2baf556c1fbba3ef89b54331e0937b5d85ff5fc2985"} Apr 23 18:13:41.830724 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:41.830606 2572 scope.go:117] "RemoveContainer" containerID="d98089537cb4f98af3e3e14f35d3b7260df779da8616c5b222fe2c49f7e22af8" Apr 23 18:13:41.830724 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:41.830613 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh" Apr 23 18:13:41.840065 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:41.840047 2572 scope.go:117] "RemoveContainer" containerID="d98089537cb4f98af3e3e14f35d3b7260df779da8616c5b222fe2c49f7e22af8" Apr 23 18:13:41.840328 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:13:41.840308 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d98089537cb4f98af3e3e14f35d3b7260df779da8616c5b222fe2c49f7e22af8\": container with ID starting with d98089537cb4f98af3e3e14f35d3b7260df779da8616c5b222fe2c49f7e22af8 not found: ID does not exist" containerID="d98089537cb4f98af3e3e14f35d3b7260df779da8616c5b222fe2c49f7e22af8" Apr 23 18:13:41.840413 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:41.840335 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98089537cb4f98af3e3e14f35d3b7260df779da8616c5b222fe2c49f7e22af8"} err="failed to get container status \"d98089537cb4f98af3e3e14f35d3b7260df779da8616c5b222fe2c49f7e22af8\": rpc error: code = NotFound desc = 
could not find container \"d98089537cb4f98af3e3e14f35d3b7260df779da8616c5b222fe2c49f7e22af8\": container with ID starting with d98089537cb4f98af3e3e14f35d3b7260df779da8616c5b222fe2c49f7e22af8 not found: ID does not exist" Apr 23 18:13:41.852883 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:41.852859 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh"] Apr 23 18:13:41.857334 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:41.857313 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-226e9-596665bfbf-p64qh"] Apr 23 18:13:43.418215 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:43.418180 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe1e4c4-4461-4367-9e98-75a1e61bc178" path="/var/lib/kubelet/pods/5fe1e4c4-4461-4367-9e98-75a1e61bc178/volumes" Apr 23 18:13:50.776479 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:50.776441 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm"] Apr 23 18:13:50.778651 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:50.776707 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="93c16c6f-faad-4314-abe0-0789a35a76bb" containerName="sequence-graph-0065f" Apr 23 18:13:50.778651 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:50.776718 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c16c6f-faad-4314-abe0-0789a35a76bb" containerName="sequence-graph-0065f" Apr 23 18:13:50.778651 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:50.776735 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5fe1e4c4-4461-4367-9e98-75a1e61bc178" containerName="ensemble-graph-226e9" Apr 23 18:13:50.778651 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:50.776741 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe1e4c4-4461-4367-9e98-75a1e61bc178" 
containerName="ensemble-graph-226e9" Apr 23 18:13:50.778651 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:50.776789 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="93c16c6f-faad-4314-abe0-0789a35a76bb" containerName="sequence-graph-0065f" Apr 23 18:13:50.778651 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:50.776801 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="5fe1e4c4-4461-4367-9e98-75a1e61bc178" containerName="ensemble-graph-226e9" Apr 23 18:13:50.779523 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:50.779508 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" Apr 23 18:13:50.781632 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:50.781603 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 23 18:13:50.781632 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:50.781616 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-d864a-serving-cert\"" Apr 23 18:13:50.781806 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:50.781678 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-d864a-kube-rbac-proxy-sar-config\"" Apr 23 18:13:50.781806 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:50.781703 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-jv9tx\"" Apr 23 18:13:50.789697 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:50.789675 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm"] Apr 23 18:13:50.931338 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:50.931308 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/379de24a-69bc-42c7-8e6e-bcef40a7af97-openshift-service-ca-bundle\") pod \"sequence-graph-d864a-6c6d7bb99-5j4hm\" (UID: \"379de24a-69bc-42c7-8e6e-bcef40a7af97\") " pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" Apr 23 18:13:50.931524 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:50.931349 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/379de24a-69bc-42c7-8e6e-bcef40a7af97-proxy-tls\") pod \"sequence-graph-d864a-6c6d7bb99-5j4hm\" (UID: \"379de24a-69bc-42c7-8e6e-bcef40a7af97\") " pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" Apr 23 18:13:51.032597 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:51.032507 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/379de24a-69bc-42c7-8e6e-bcef40a7af97-openshift-service-ca-bundle\") pod \"sequence-graph-d864a-6c6d7bb99-5j4hm\" (UID: \"379de24a-69bc-42c7-8e6e-bcef40a7af97\") " pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" Apr 23 18:13:51.032597 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:51.032577 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/379de24a-69bc-42c7-8e6e-bcef40a7af97-proxy-tls\") pod \"sequence-graph-d864a-6c6d7bb99-5j4hm\" (UID: \"379de24a-69bc-42c7-8e6e-bcef40a7af97\") " pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" Apr 23 18:13:51.032750 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:13:51.032679 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/sequence-graph-d864a-serving-cert: secret "sequence-graph-d864a-serving-cert" not found Apr 23 18:13:51.032750 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:13:51.032746 2572 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/379de24a-69bc-42c7-8e6e-bcef40a7af97-proxy-tls podName:379de24a-69bc-42c7-8e6e-bcef40a7af97 nodeName:}" failed. No retries permitted until 2026-04-23 18:13:51.532727781 +0000 UTC m=+1306.727632493 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/379de24a-69bc-42c7-8e6e-bcef40a7af97-proxy-tls") pod "sequence-graph-d864a-6c6d7bb99-5j4hm" (UID: "379de24a-69bc-42c7-8e6e-bcef40a7af97") : secret "sequence-graph-d864a-serving-cert" not found Apr 23 18:13:51.033163 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:51.033143 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/379de24a-69bc-42c7-8e6e-bcef40a7af97-openshift-service-ca-bundle\") pod \"sequence-graph-d864a-6c6d7bb99-5j4hm\" (UID: \"379de24a-69bc-42c7-8e6e-bcef40a7af97\") " pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" Apr 23 18:13:51.537271 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:51.537221 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/379de24a-69bc-42c7-8e6e-bcef40a7af97-proxy-tls\") pod \"sequence-graph-d864a-6c6d7bb99-5j4hm\" (UID: \"379de24a-69bc-42c7-8e6e-bcef40a7af97\") " pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" Apr 23 18:13:51.539689 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:51.539664 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/379de24a-69bc-42c7-8e6e-bcef40a7af97-proxy-tls\") pod \"sequence-graph-d864a-6c6d7bb99-5j4hm\" (UID: \"379de24a-69bc-42c7-8e6e-bcef40a7af97\") " pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" Apr 23 18:13:51.689075 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:51.689034 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" Apr 23 18:13:51.813920 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:51.813835 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm"] Apr 23 18:13:51.816691 ip-10-0-133-178 kubenswrapper[2572]: W0423 18:13:51.816665 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod379de24a_69bc_42c7_8e6e_bcef40a7af97.slice/crio-cb482064a2795671bdc410cef62c73336c136f826c583f51e56e061cac138ec4 WatchSource:0}: Error finding container cb482064a2795671bdc410cef62c73336c136f826c583f51e56e061cac138ec4: Status 404 returned error can't find the container with id cb482064a2795671bdc410cef62c73336c136f826c583f51e56e061cac138ec4 Apr 23 18:13:51.857715 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:51.857679 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" event={"ID":"379de24a-69bc-42c7-8e6e-bcef40a7af97","Type":"ContainerStarted","Data":"cb482064a2795671bdc410cef62c73336c136f826c583f51e56e061cac138ec4"} Apr 23 18:13:52.861645 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:52.861608 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" event={"ID":"379de24a-69bc-42c7-8e6e-bcef40a7af97","Type":"ContainerStarted","Data":"dd29c1b044d8e3fde2b8061594f13d8d6d1f9dfbc69d6637bada31c57bb2fbc5"} Apr 23 18:13:52.862025 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:52.861753 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" Apr 23 18:13:52.887392 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:52.887344 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" 
podStartSLOduration=2.887330269 podStartE2EDuration="2.887330269s" podCreationTimestamp="2026-04-23 18:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:13:52.885788755 +0000 UTC m=+1308.080693481" watchObservedRunningTime="2026-04-23 18:13:52.887330269 +0000 UTC m=+1308.082235011" Apr 23 18:13:58.870517 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:13:58.870482 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" Apr 23 18:14:00.828730 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:00.828695 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm"] Apr 23 18:14:00.829120 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:00.828899 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" podUID="379de24a-69bc-42c7-8e6e-bcef40a7af97" containerName="sequence-graph-d864a" containerID="cri-o://dd29c1b044d8e3fde2b8061594f13d8d6d1f9dfbc69d6637bada31c57bb2fbc5" gracePeriod=30 Apr 23 18:14:03.868802 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:03.868765 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" podUID="379de24a-69bc-42c7-8e6e-bcef40a7af97" containerName="sequence-graph-d864a" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:14:08.868617 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:08.868573 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" podUID="379de24a-69bc-42c7-8e6e-bcef40a7af97" containerName="sequence-graph-d864a" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:14:11.068634 ip-10-0-133-178 kubenswrapper[2572]: I0423 
18:14:11.068601 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr"] Apr 23 18:14:11.073113 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:11.073095 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" Apr 23 18:14:11.077015 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:11.076986 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-5c6fb-kube-rbac-proxy-sar-config\"" Apr 23 18:14:11.077015 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:11.077007 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-5c6fb-serving-cert\"" Apr 23 18:14:11.081237 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:11.081216 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6afaa16e-b778-4ab1-89c7-155606b721da-proxy-tls\") pod \"ensemble-graph-5c6fb-d586947bb-nqqrr\" (UID: \"6afaa16e-b778-4ab1-89c7-155606b721da\") " pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" Apr 23 18:14:11.081347 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:11.081248 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6afaa16e-b778-4ab1-89c7-155606b721da-openshift-service-ca-bundle\") pod \"ensemble-graph-5c6fb-d586947bb-nqqrr\" (UID: \"6afaa16e-b778-4ab1-89c7-155606b721da\") " pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" Apr 23 18:14:11.084015 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:11.083993 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr"] Apr 23 18:14:11.182528 ip-10-0-133-178 kubenswrapper[2572]: I0423 
18:14:11.182496 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6afaa16e-b778-4ab1-89c7-155606b721da-proxy-tls\") pod \"ensemble-graph-5c6fb-d586947bb-nqqrr\" (UID: \"6afaa16e-b778-4ab1-89c7-155606b721da\") " pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" Apr 23 18:14:11.182528 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:11.182535 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6afaa16e-b778-4ab1-89c7-155606b721da-openshift-service-ca-bundle\") pod \"ensemble-graph-5c6fb-d586947bb-nqqrr\" (UID: \"6afaa16e-b778-4ab1-89c7-155606b721da\") " pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" Apr 23 18:14:11.182742 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:14:11.182650 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/ensemble-graph-5c6fb-serving-cert: secret "ensemble-graph-5c6fb-serving-cert" not found Apr 23 18:14:11.182742 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:14:11.182717 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6afaa16e-b778-4ab1-89c7-155606b721da-proxy-tls podName:6afaa16e-b778-4ab1-89c7-155606b721da nodeName:}" failed. No retries permitted until 2026-04-23 18:14:11.68269823 +0000 UTC m=+1326.877602938 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/6afaa16e-b778-4ab1-89c7-155606b721da-proxy-tls") pod "ensemble-graph-5c6fb-d586947bb-nqqrr" (UID: "6afaa16e-b778-4ab1-89c7-155606b721da") : secret "ensemble-graph-5c6fb-serving-cert" not found Apr 23 18:14:11.183115 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:11.183099 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6afaa16e-b778-4ab1-89c7-155606b721da-openshift-service-ca-bundle\") pod \"ensemble-graph-5c6fb-d586947bb-nqqrr\" (UID: \"6afaa16e-b778-4ab1-89c7-155606b721da\") " pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" Apr 23 18:14:11.686754 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:11.686700 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6afaa16e-b778-4ab1-89c7-155606b721da-proxy-tls\") pod \"ensemble-graph-5c6fb-d586947bb-nqqrr\" (UID: \"6afaa16e-b778-4ab1-89c7-155606b721da\") " pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" Apr 23 18:14:11.689136 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:11.689116 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6afaa16e-b778-4ab1-89c7-155606b721da-proxy-tls\") pod \"ensemble-graph-5c6fb-d586947bb-nqqrr\" (UID: \"6afaa16e-b778-4ab1-89c7-155606b721da\") " pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" Apr 23 18:14:11.982539 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:11.982513 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" Apr 23 18:14:12.100377 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:12.100346 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr"] Apr 23 18:14:12.915104 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:12.915067 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" event={"ID":"6afaa16e-b778-4ab1-89c7-155606b721da","Type":"ContainerStarted","Data":"fb08793186ef9c3b04d39baf8cabd741d1ca6abd86ac09a7a630caa0eeb0179b"} Apr 23 18:14:12.915104 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:12.915108 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" event={"ID":"6afaa16e-b778-4ab1-89c7-155606b721da","Type":"ContainerStarted","Data":"fcbbf6ef30a31523219f88a8f719806c6dd57b7a3a0800ec6d72fb75a323c57a"} Apr 23 18:14:12.915341 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:12.915223 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" Apr 23 18:14:12.935873 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:12.935830 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" podStartSLOduration=1.9358188840000001 podStartE2EDuration="1.935818884s" podCreationTimestamp="2026-04-23 18:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:14:12.934579335 +0000 UTC m=+1328.129484063" watchObservedRunningTime="2026-04-23 18:14:12.935818884 +0000 UTC m=+1328.130723611" Apr 23 18:14:13.868696 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:13.868656 2572 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" podUID="379de24a-69bc-42c7-8e6e-bcef40a7af97" containerName="sequence-graph-d864a" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:14:13.869131 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:13.868780 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" Apr 23 18:14:18.869475 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:18.869365 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" podUID="379de24a-69bc-42c7-8e6e-bcef40a7af97" containerName="sequence-graph-d864a" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:14:18.922972 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:18.922949 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" Apr 23 18:14:23.868590 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:23.868547 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" podUID="379de24a-69bc-42c7-8e6e-bcef40a7af97" containerName="sequence-graph-d864a" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:14:28.868753 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:28.868713 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" podUID="379de24a-69bc-42c7-8e6e-bcef40a7af97" containerName="sequence-graph-d864a" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:14:30.848646 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:14:30.848608 2572 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod379de24a_69bc_42c7_8e6e_bcef40a7af97.slice/crio-cb482064a2795671bdc410cef62c73336c136f826c583f51e56e061cac138ec4\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod379de24a_69bc_42c7_8e6e_bcef40a7af97.slice/crio-dd29c1b044d8e3fde2b8061594f13d8d6d1f9dfbc69d6637bada31c57bb2fbc5.scope\": RecentStats: unable to find data in memory cache]" Apr 23 18:14:30.849024 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:14:30.848621 2572 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod379de24a_69bc_42c7_8e6e_bcef40a7af97.slice/crio-conmon-dd29c1b044d8e3fde2b8061594f13d8d6d1f9dfbc69d6637bada31c57bb2fbc5.scope\": RecentStats: unable to find data in memory cache]" Apr 23 18:14:30.849024 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:14:30.848729 2572 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod379de24a_69bc_42c7_8e6e_bcef40a7af97.slice/crio-dd29c1b044d8e3fde2b8061594f13d8d6d1f9dfbc69d6637bada31c57bb2fbc5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod379de24a_69bc_42c7_8e6e_bcef40a7af97.slice/crio-conmon-dd29c1b044d8e3fde2b8061594f13d8d6d1f9dfbc69d6637bada31c57bb2fbc5.scope\": RecentStats: unable to find data in memory cache]" Apr 23 18:14:30.961732 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:30.961699 2572 generic.go:358] "Generic (PLEG): container finished" podID="379de24a-69bc-42c7-8e6e-bcef40a7af97" containerID="dd29c1b044d8e3fde2b8061594f13d8d6d1f9dfbc69d6637bada31c57bb2fbc5" exitCode=0 Apr 23 18:14:30.961885 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:30.961747 2572 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" event={"ID":"379de24a-69bc-42c7-8e6e-bcef40a7af97","Type":"ContainerDied","Data":"dd29c1b044d8e3fde2b8061594f13d8d6d1f9dfbc69d6637bada31c57bb2fbc5"} Apr 23 18:14:30.981920 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:30.981900 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" Apr 23 18:14:31.029935 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:31.029903 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/379de24a-69bc-42c7-8e6e-bcef40a7af97-openshift-service-ca-bundle\") pod \"379de24a-69bc-42c7-8e6e-bcef40a7af97\" (UID: \"379de24a-69bc-42c7-8e6e-bcef40a7af97\") " Apr 23 18:14:31.030105 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:31.029963 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/379de24a-69bc-42c7-8e6e-bcef40a7af97-proxy-tls\") pod \"379de24a-69bc-42c7-8e6e-bcef40a7af97\" (UID: \"379de24a-69bc-42c7-8e6e-bcef40a7af97\") " Apr 23 18:14:31.030286 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:31.030260 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379de24a-69bc-42c7-8e6e-bcef40a7af97-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "379de24a-69bc-42c7-8e6e-bcef40a7af97" (UID: "379de24a-69bc-42c7-8e6e-bcef40a7af97"). InnerVolumeSpecName "openshift-service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:14:31.031957 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:31.031932 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/379de24a-69bc-42c7-8e6e-bcef40a7af97-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "379de24a-69bc-42c7-8e6e-bcef40a7af97" (UID: "379de24a-69bc-42c7-8e6e-bcef40a7af97"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:14:31.130619 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:31.130539 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/379de24a-69bc-42c7-8e6e-bcef40a7af97-proxy-tls\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:14:31.130619 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:31.130569 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/379de24a-69bc-42c7-8e6e-bcef40a7af97-openshift-service-ca-bundle\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:14:31.965081 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:31.965041 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" event={"ID":"379de24a-69bc-42c7-8e6e-bcef40a7af97","Type":"ContainerDied","Data":"cb482064a2795671bdc410cef62c73336c136f826c583f51e56e061cac138ec4"} Apr 23 18:14:31.965522 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:31.965098 2572 scope.go:117] "RemoveContainer" containerID="dd29c1b044d8e3fde2b8061594f13d8d6d1f9dfbc69d6637bada31c57bb2fbc5" Apr 23 18:14:31.965522 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:31.965059 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm" Apr 23 18:14:31.982221 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:31.982194 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm"] Apr 23 18:14:31.986823 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:31.986801 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-d864a-6c6d7bb99-5j4hm"] Apr 23 18:14:33.418094 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:14:33.418058 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="379de24a-69bc-42c7-8e6e-bcef40a7af97" path="/var/lib/kubelet/pods/379de24a-69bc-42c7-8e6e-bcef40a7af97/volumes" Apr 23 18:15:01.046734 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:01.046696 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd"] Apr 23 18:15:01.047219 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:01.046969 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="379de24a-69bc-42c7-8e6e-bcef40a7af97" containerName="sequence-graph-d864a" Apr 23 18:15:01.047219 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:01.046980 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="379de24a-69bc-42c7-8e6e-bcef40a7af97" containerName="sequence-graph-d864a" Apr 23 18:15:01.047219 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:01.047027 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="379de24a-69bc-42c7-8e6e-bcef40a7af97" containerName="sequence-graph-d864a" Apr 23 18:15:01.049621 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:01.049604 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" Apr 23 18:15:01.051414 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:01.051384 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-e1fac-serving-cert\"" Apr 23 18:15:01.051511 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:01.051473 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-e1fac-kube-rbac-proxy-sar-config\"" Apr 23 18:15:01.057448 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:01.057426 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd"] Apr 23 18:15:01.167127 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:01.167085 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df6da3d-538a-4c59-8197-289f88b09488-proxy-tls\") pod \"sequence-graph-e1fac-84bd497c8-5xwfd\" (UID: \"4df6da3d-538a-4c59-8197-289f88b09488\") " pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" Apr 23 18:15:01.167127 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:01.167129 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4df6da3d-538a-4c59-8197-289f88b09488-openshift-service-ca-bundle\") pod \"sequence-graph-e1fac-84bd497c8-5xwfd\" (UID: \"4df6da3d-538a-4c59-8197-289f88b09488\") " pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" Apr 23 18:15:01.267888 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:01.267854 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df6da3d-538a-4c59-8197-289f88b09488-proxy-tls\") pod \"sequence-graph-e1fac-84bd497c8-5xwfd\" (UID: 
\"4df6da3d-538a-4c59-8197-289f88b09488\") " pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" Apr 23 18:15:01.267888 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:01.267890 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4df6da3d-538a-4c59-8197-289f88b09488-openshift-service-ca-bundle\") pod \"sequence-graph-e1fac-84bd497c8-5xwfd\" (UID: \"4df6da3d-538a-4c59-8197-289f88b09488\") " pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" Apr 23 18:15:01.268101 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:15:01.268010 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/sequence-graph-e1fac-serving-cert: secret "sequence-graph-e1fac-serving-cert" not found Apr 23 18:15:01.268101 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:15:01.268088 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df6da3d-538a-4c59-8197-289f88b09488-proxy-tls podName:4df6da3d-538a-4c59-8197-289f88b09488 nodeName:}" failed. No retries permitted until 2026-04-23 18:15:01.768072704 +0000 UTC m=+1376.962977414 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df6da3d-538a-4c59-8197-289f88b09488-proxy-tls") pod "sequence-graph-e1fac-84bd497c8-5xwfd" (UID: "4df6da3d-538a-4c59-8197-289f88b09488") : secret "sequence-graph-e1fac-serving-cert" not found Apr 23 18:15:01.268546 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:01.268529 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4df6da3d-538a-4c59-8197-289f88b09488-openshift-service-ca-bundle\") pod \"sequence-graph-e1fac-84bd497c8-5xwfd\" (UID: \"4df6da3d-538a-4c59-8197-289f88b09488\") " pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" Apr 23 18:15:01.771871 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:01.771839 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df6da3d-538a-4c59-8197-289f88b09488-proxy-tls\") pod \"sequence-graph-e1fac-84bd497c8-5xwfd\" (UID: \"4df6da3d-538a-4c59-8197-289f88b09488\") " pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" Apr 23 18:15:01.774200 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:01.774178 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df6da3d-538a-4c59-8197-289f88b09488-proxy-tls\") pod \"sequence-graph-e1fac-84bd497c8-5xwfd\" (UID: \"4df6da3d-538a-4c59-8197-289f88b09488\") " pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" Apr 23 18:15:01.959701 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:01.959653 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" Apr 23 18:15:02.076426 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:02.076368 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd"] Apr 23 18:15:02.079409 ip-10-0-133-178 kubenswrapper[2572]: W0423 18:15:02.079371 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4df6da3d_538a_4c59_8197_289f88b09488.slice/crio-9bdf09efc6915b529264e653f4c0ad020b783ec7e0b821f1a166fb1ab389cce3 WatchSource:0}: Error finding container 9bdf09efc6915b529264e653f4c0ad020b783ec7e0b821f1a166fb1ab389cce3: Status 404 returned error can't find the container with id 9bdf09efc6915b529264e653f4c0ad020b783ec7e0b821f1a166fb1ab389cce3 Apr 23 18:15:03.049592 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:03.049550 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" event={"ID":"4df6da3d-538a-4c59-8197-289f88b09488","Type":"ContainerStarted","Data":"d0c1fed16000f28890c531232c20cfe39c696558adbd86fdc23639801d97b6a2"} Apr 23 18:15:03.049592 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:03.049593 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" event={"ID":"4df6da3d-538a-4c59-8197-289f88b09488","Type":"ContainerStarted","Data":"9bdf09efc6915b529264e653f4c0ad020b783ec7e0b821f1a166fb1ab389cce3"} Apr 23 18:15:03.049805 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:03.049685 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" Apr 23 18:15:03.066685 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:03.066637 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" 
podStartSLOduration=2.066621643 podStartE2EDuration="2.066621643s" podCreationTimestamp="2026-04-23 18:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:15:03.065380119 +0000 UTC m=+1378.260284865" watchObservedRunningTime="2026-04-23 18:15:03.066621643 +0000 UTC m=+1378.261526372" Apr 23 18:15:09.058915 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:15:09.058880 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" Apr 23 18:17:05.403205 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:17:05.403175 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 18:17:05.405460 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:17:05.405431 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 18:17:05.407363 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:17:05.407341 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 18:17:05.409650 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:17:05.409632 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 18:22:05.422593 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:05.422561 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 18:22:05.424873 ip-10-0-133-178 kubenswrapper[2572]: 
I0423 18:22:05.424855 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 18:22:05.426810 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:05.426791 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 18:22:05.429016 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:05.428997 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 18:22:25.827636 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:25.827603 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr"] Apr 23 18:22:25.828103 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:25.827841 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" podUID="6afaa16e-b778-4ab1-89c7-155606b721da" containerName="ensemble-graph-5c6fb" containerID="cri-o://fb08793186ef9c3b04d39baf8cabd741d1ca6abd86ac09a7a630caa0eeb0179b" gracePeriod=30 Apr 23 18:22:28.921498 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:28.921460 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" podUID="6afaa16e-b778-4ab1-89c7-155606b721da" containerName="ensemble-graph-5c6fb" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:22:33.921344 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:33.921304 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" podUID="6afaa16e-b778-4ab1-89c7-155606b721da" 
containerName="ensemble-graph-5c6fb" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:22:38.922159 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:38.922123 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" podUID="6afaa16e-b778-4ab1-89c7-155606b721da" containerName="ensemble-graph-5c6fb" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:22:38.922566 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:38.922245 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" Apr 23 18:22:43.921866 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:43.921827 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" podUID="6afaa16e-b778-4ab1-89c7-155606b721da" containerName="ensemble-graph-5c6fb" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:22:48.921293 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:48.921251 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" podUID="6afaa16e-b778-4ab1-89c7-155606b721da" containerName="ensemble-graph-5c6fb" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:22:53.921959 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:53.921917 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" podUID="6afaa16e-b778-4ab1-89c7-155606b721da" containerName="ensemble-graph-5c6fb" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:22:55.965872 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:55.965849 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" Apr 23 18:22:56.099889 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:56.099796 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6afaa16e-b778-4ab1-89c7-155606b721da-proxy-tls\") pod \"6afaa16e-b778-4ab1-89c7-155606b721da\" (UID: \"6afaa16e-b778-4ab1-89c7-155606b721da\") " Apr 23 18:22:56.099889 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:56.099866 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6afaa16e-b778-4ab1-89c7-155606b721da-openshift-service-ca-bundle\") pod \"6afaa16e-b778-4ab1-89c7-155606b721da\" (UID: \"6afaa16e-b778-4ab1-89c7-155606b721da\") " Apr 23 18:22:56.100280 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:56.100236 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6afaa16e-b778-4ab1-89c7-155606b721da-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "6afaa16e-b778-4ab1-89c7-155606b721da" (UID: "6afaa16e-b778-4ab1-89c7-155606b721da"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:22:56.101941 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:56.101915 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6afaa16e-b778-4ab1-89c7-155606b721da-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "6afaa16e-b778-4ab1-89c7-155606b721da" (UID: "6afaa16e-b778-4ab1-89c7-155606b721da"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:22:56.201105 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:56.201070 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6afaa16e-b778-4ab1-89c7-155606b721da-proxy-tls\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:22:56.201105 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:56.201101 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6afaa16e-b778-4ab1-89c7-155606b721da-openshift-service-ca-bundle\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:22:56.309839 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:56.309808 2572 generic.go:358] "Generic (PLEG): container finished" podID="6afaa16e-b778-4ab1-89c7-155606b721da" containerID="fb08793186ef9c3b04d39baf8cabd741d1ca6abd86ac09a7a630caa0eeb0179b" exitCode=0 Apr 23 18:22:56.310004 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:56.309849 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" event={"ID":"6afaa16e-b778-4ab1-89c7-155606b721da","Type":"ContainerDied","Data":"fb08793186ef9c3b04d39baf8cabd741d1ca6abd86ac09a7a630caa0eeb0179b"} Apr 23 18:22:56.310004 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:56.309869 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" Apr 23 18:22:56.310004 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:56.309884 2572 scope.go:117] "RemoveContainer" containerID="fb08793186ef9c3b04d39baf8cabd741d1ca6abd86ac09a7a630caa0eeb0179b" Apr 23 18:22:56.310004 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:56.309872 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr" event={"ID":"6afaa16e-b778-4ab1-89c7-155606b721da","Type":"ContainerDied","Data":"fcbbf6ef30a31523219f88a8f719806c6dd57b7a3a0800ec6d72fb75a323c57a"} Apr 23 18:22:56.320040 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:56.319959 2572 scope.go:117] "RemoveContainer" containerID="fb08793186ef9c3b04d39baf8cabd741d1ca6abd86ac09a7a630caa0eeb0179b" Apr 23 18:22:56.320745 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:22:56.320720 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb08793186ef9c3b04d39baf8cabd741d1ca6abd86ac09a7a630caa0eeb0179b\": container with ID starting with fb08793186ef9c3b04d39baf8cabd741d1ca6abd86ac09a7a630caa0eeb0179b not found: ID does not exist" containerID="fb08793186ef9c3b04d39baf8cabd741d1ca6abd86ac09a7a630caa0eeb0179b" Apr 23 18:22:56.320849 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:56.320752 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb08793186ef9c3b04d39baf8cabd741d1ca6abd86ac09a7a630caa0eeb0179b"} err="failed to get container status \"fb08793186ef9c3b04d39baf8cabd741d1ca6abd86ac09a7a630caa0eeb0179b\": rpc error: code = NotFound desc = could not find container \"fb08793186ef9c3b04d39baf8cabd741d1ca6abd86ac09a7a630caa0eeb0179b\": container with ID starting with fb08793186ef9c3b04d39baf8cabd741d1ca6abd86ac09a7a630caa0eeb0179b not found: ID does not exist" Apr 23 18:22:56.332887 ip-10-0-133-178 kubenswrapper[2572]: I0423 
18:22:56.332864 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr"] Apr 23 18:22:56.337707 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:56.337685 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-5c6fb-d586947bb-nqqrr"] Apr 23 18:22:57.417910 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:22:57.417873 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6afaa16e-b778-4ab1-89c7-155606b721da" path="/var/lib/kubelet/pods/6afaa16e-b778-4ab1-89c7-155606b721da/volumes" Apr 23 18:23:15.666055 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:15.665975 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd"] Apr 23 18:23:15.666512 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:15.666280 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" podUID="4df6da3d-538a-4c59-8197-289f88b09488" containerName="sequence-graph-e1fac" containerID="cri-o://d0c1fed16000f28890c531232c20cfe39c696558adbd86fdc23639801d97b6a2" gracePeriod=30 Apr 23 18:23:19.056369 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:19.056334 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" podUID="4df6da3d-538a-4c59-8197-289f88b09488" containerName="sequence-graph-e1fac" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:23:24.057244 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:24.057203 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" podUID="4df6da3d-538a-4c59-8197-289f88b09488" containerName="sequence-graph-e1fac" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:23:26.062709 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.062669 
2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5"] Apr 23 18:23:26.063099 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.062921 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6afaa16e-b778-4ab1-89c7-155606b721da" containerName="ensemble-graph-5c6fb" Apr 23 18:23:26.063099 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.062932 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="6afaa16e-b778-4ab1-89c7-155606b721da" containerName="ensemble-graph-5c6fb" Apr 23 18:23:26.063099 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.062988 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="6afaa16e-b778-4ab1-89c7-155606b721da" containerName="ensemble-graph-5c6fb" Apr 23 18:23:26.067031 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.067014 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" Apr 23 18:23:26.068901 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.068875 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-a93f3-serving-cert\"" Apr 23 18:23:26.068901 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.068894 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-a93f3-kube-rbac-proxy-sar-config\"" Apr 23 18:23:26.075078 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.075057 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5"] Apr 23 18:23:26.227170 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.227118 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0fca9923-b6f0-43a8-8db9-469992c798e0-openshift-service-ca-bundle\") pod \"splitter-graph-a93f3-c7fd4dc8d-r2jt5\" (UID: \"0fca9923-b6f0-43a8-8db9-469992c798e0\") " pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" Apr 23 18:23:26.227170 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.227178 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fca9923-b6f0-43a8-8db9-469992c798e0-proxy-tls\") pod \"splitter-graph-a93f3-c7fd4dc8d-r2jt5\" (UID: \"0fca9923-b6f0-43a8-8db9-469992c798e0\") " pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" Apr 23 18:23:26.327660 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.327523 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fca9923-b6f0-43a8-8db9-469992c798e0-openshift-service-ca-bundle\") pod \"splitter-graph-a93f3-c7fd4dc8d-r2jt5\" (UID: \"0fca9923-b6f0-43a8-8db9-469992c798e0\") " pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" Apr 23 18:23:26.327660 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.327573 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fca9923-b6f0-43a8-8db9-469992c798e0-proxy-tls\") pod \"splitter-graph-a93f3-c7fd4dc8d-r2jt5\" (UID: \"0fca9923-b6f0-43a8-8db9-469992c798e0\") " pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" Apr 23 18:23:26.328392 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.328367 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fca9923-b6f0-43a8-8db9-469992c798e0-openshift-service-ca-bundle\") pod \"splitter-graph-a93f3-c7fd4dc8d-r2jt5\" (UID: \"0fca9923-b6f0-43a8-8db9-469992c798e0\") " 
pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" Apr 23 18:23:26.330186 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.330167 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fca9923-b6f0-43a8-8db9-469992c798e0-proxy-tls\") pod \"splitter-graph-a93f3-c7fd4dc8d-r2jt5\" (UID: \"0fca9923-b6f0-43a8-8db9-469992c798e0\") " pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" Apr 23 18:23:26.378271 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.378234 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" Apr 23 18:23:26.495800 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.495769 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5"] Apr 23 18:23:26.499205 ip-10-0-133-178 kubenswrapper[2572]: W0423 18:23:26.499172 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fca9923_b6f0_43a8_8db9_469992c798e0.slice/crio-314a2a9e3b80c2be282d7b76b0fcc7ec1c382fb82eeeb860ad34060157893608 WatchSource:0}: Error finding container 314a2a9e3b80c2be282d7b76b0fcc7ec1c382fb82eeeb860ad34060157893608: Status 404 returned error can't find the container with id 314a2a9e3b80c2be282d7b76b0fcc7ec1c382fb82eeeb860ad34060157893608 Apr 23 18:23:26.500878 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:26.500860 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 18:23:27.390083 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:27.390047 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" event={"ID":"0fca9923-b6f0-43a8-8db9-469992c798e0","Type":"ContainerStarted","Data":"0f11a0ef5c7a9d9fd1d56a4bb27718c64fa01aeb55d264ebe504bae7fe27e6ad"} Apr 23 
18:23:27.390083 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:27.390084 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" event={"ID":"0fca9923-b6f0-43a8-8db9-469992c798e0","Type":"ContainerStarted","Data":"314a2a9e3b80c2be282d7b76b0fcc7ec1c382fb82eeeb860ad34060157893608"} Apr 23 18:23:27.390527 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:27.390110 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" Apr 23 18:23:27.406025 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:27.405977 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" podStartSLOduration=1.405961549 podStartE2EDuration="1.405961549s" podCreationTimestamp="2026-04-23 18:23:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:23:27.404963256 +0000 UTC m=+1882.599867983" watchObservedRunningTime="2026-04-23 18:23:27.405961549 +0000 UTC m=+1882.600866275" Apr 23 18:23:29.056613 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:29.056574 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" podUID="4df6da3d-538a-4c59-8197-289f88b09488" containerName="sequence-graph-e1fac" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:23:29.057079 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:29.056727 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" Apr 23 18:23:33.398544 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:33.398512 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" Apr 23 18:23:34.057361 
ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:34.057322 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" podUID="4df6da3d-538a-4c59-8197-289f88b09488" containerName="sequence-graph-e1fac" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:23:36.146827 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:36.146751 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5"] Apr 23 18:23:36.147170 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:36.146979 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" podUID="0fca9923-b6f0-43a8-8db9-469992c798e0" containerName="splitter-graph-a93f3" containerID="cri-o://0f11a0ef5c7a9d9fd1d56a4bb27718c64fa01aeb55d264ebe504bae7fe27e6ad" gracePeriod=30 Apr 23 18:23:38.397669 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:38.397625 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" podUID="0fca9923-b6f0-43a8-8db9-469992c798e0" containerName="splitter-graph-a93f3" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:23:39.057224 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:39.057181 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" podUID="4df6da3d-538a-4c59-8197-289f88b09488" containerName="sequence-graph-e1fac" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:23:43.397510 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:43.397469 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" podUID="0fca9923-b6f0-43a8-8db9-469992c798e0" containerName="splitter-graph-a93f3" probeResult="failure" output="HTTP probe failed with statuscode: 503" 
Apr 23 18:23:44.057082 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:44.057044 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" podUID="4df6da3d-538a-4c59-8197-289f88b09488" containerName="sequence-graph-e1fac" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:23:45.810059 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:45.810036 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" Apr 23 18:23:45.975542 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:45.975516 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df6da3d-538a-4c59-8197-289f88b09488-proxy-tls\") pod \"4df6da3d-538a-4c59-8197-289f88b09488\" (UID: \"4df6da3d-538a-4c59-8197-289f88b09488\") " Apr 23 18:23:45.975701 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:45.975595 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4df6da3d-538a-4c59-8197-289f88b09488-openshift-service-ca-bundle\") pod \"4df6da3d-538a-4c59-8197-289f88b09488\" (UID: \"4df6da3d-538a-4c59-8197-289f88b09488\") " Apr 23 18:23:45.975948 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:45.975914 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4df6da3d-538a-4c59-8197-289f88b09488-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "4df6da3d-538a-4c59-8197-289f88b09488" (UID: "4df6da3d-538a-4c59-8197-289f88b09488"). InnerVolumeSpecName "openshift-service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:23:45.977560 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:45.977541 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df6da3d-538a-4c59-8197-289f88b09488-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "4df6da3d-538a-4c59-8197-289f88b09488" (UID: "4df6da3d-538a-4c59-8197-289f88b09488"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:23:46.076720 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:46.076668 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df6da3d-538a-4c59-8197-289f88b09488-proxy-tls\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:23:46.076720 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:46.076715 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4df6da3d-538a-4c59-8197-289f88b09488-openshift-service-ca-bundle\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:23:46.445631 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:46.445537 2572 generic.go:358] "Generic (PLEG): container finished" podID="4df6da3d-538a-4c59-8197-289f88b09488" containerID="d0c1fed16000f28890c531232c20cfe39c696558adbd86fdc23639801d97b6a2" exitCode=0 Apr 23 18:23:46.445631 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:46.445600 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" Apr 23 18:23:46.445831 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:46.445600 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" event={"ID":"4df6da3d-538a-4c59-8197-289f88b09488","Type":"ContainerDied","Data":"d0c1fed16000f28890c531232c20cfe39c696558adbd86fdc23639801d97b6a2"} Apr 23 18:23:46.445831 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:46.445698 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd" event={"ID":"4df6da3d-538a-4c59-8197-289f88b09488","Type":"ContainerDied","Data":"9bdf09efc6915b529264e653f4c0ad020b783ec7e0b821f1a166fb1ab389cce3"} Apr 23 18:23:46.445831 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:46.445712 2572 scope.go:117] "RemoveContainer" containerID="d0c1fed16000f28890c531232c20cfe39c696558adbd86fdc23639801d97b6a2" Apr 23 18:23:46.453249 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:46.453226 2572 scope.go:117] "RemoveContainer" containerID="d0c1fed16000f28890c531232c20cfe39c696558adbd86fdc23639801d97b6a2" Apr 23 18:23:46.453521 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:23:46.453494 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0c1fed16000f28890c531232c20cfe39c696558adbd86fdc23639801d97b6a2\": container with ID starting with d0c1fed16000f28890c531232c20cfe39c696558adbd86fdc23639801d97b6a2 not found: ID does not exist" containerID="d0c1fed16000f28890c531232c20cfe39c696558adbd86fdc23639801d97b6a2" Apr 23 18:23:46.453614 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:46.453527 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0c1fed16000f28890c531232c20cfe39c696558adbd86fdc23639801d97b6a2"} err="failed to get container status 
\"d0c1fed16000f28890c531232c20cfe39c696558adbd86fdc23639801d97b6a2\": rpc error: code = NotFound desc = could not find container \"d0c1fed16000f28890c531232c20cfe39c696558adbd86fdc23639801d97b6a2\": container with ID starting with d0c1fed16000f28890c531232c20cfe39c696558adbd86fdc23639801d97b6a2 not found: ID does not exist" Apr 23 18:23:46.464889 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:46.464869 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd"] Apr 23 18:23:46.470630 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:46.470609 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-e1fac-84bd497c8-5xwfd"] Apr 23 18:23:47.417996 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:47.417962 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4df6da3d-538a-4c59-8197-289f88b09488" path="/var/lib/kubelet/pods/4df6da3d-538a-4c59-8197-289f88b09488/volumes" Apr 23 18:23:48.397518 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:48.397471 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" podUID="0fca9923-b6f0-43a8-8db9-469992c798e0" containerName="splitter-graph-a93f3" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:23:48.397693 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:48.397610 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" Apr 23 18:23:53.397125 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:23:53.397081 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" podUID="0fca9923-b6f0-43a8-8db9-469992c798e0" containerName="splitter-graph-a93f3" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:23:58.397342 ip-10-0-133-178 kubenswrapper[2572]: I0423 
18:23:58.397297 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" podUID="0fca9923-b6f0-43a8-8db9-469992c798e0" containerName="splitter-graph-a93f3" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:24:03.397493 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:03.397454 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" podUID="0fca9923-b6f0-43a8-8db9-469992c798e0" containerName="splitter-graph-a93f3" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:24:06.279625 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:06.279600 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" Apr 23 18:24:06.313220 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:06.313196 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fca9923-b6f0-43a8-8db9-469992c798e0-proxy-tls\") pod \"0fca9923-b6f0-43a8-8db9-469992c798e0\" (UID: \"0fca9923-b6f0-43a8-8db9-469992c798e0\") " Apr 23 18:24:06.313349 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:06.313245 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fca9923-b6f0-43a8-8db9-469992c798e0-openshift-service-ca-bundle\") pod \"0fca9923-b6f0-43a8-8db9-469992c798e0\" (UID: \"0fca9923-b6f0-43a8-8db9-469992c798e0\") " Apr 23 18:24:06.313700 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:06.313663 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fca9923-b6f0-43a8-8db9-469992c798e0-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "0fca9923-b6f0-43a8-8db9-469992c798e0" (UID: 
"0fca9923-b6f0-43a8-8db9-469992c798e0"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:24:06.315178 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:06.315159 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fca9923-b6f0-43a8-8db9-469992c798e0-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0fca9923-b6f0-43a8-8db9-469992c798e0" (UID: "0fca9923-b6f0-43a8-8db9-469992c798e0"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:24:06.414343 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:06.414264 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fca9923-b6f0-43a8-8db9-469992c798e0-proxy-tls\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:24:06.414343 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:06.414295 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fca9923-b6f0-43a8-8db9-469992c798e0-openshift-service-ca-bundle\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:24:06.500203 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:06.500171 2572 generic.go:358] "Generic (PLEG): container finished" podID="0fca9923-b6f0-43a8-8db9-469992c798e0" containerID="0f11a0ef5c7a9d9fd1d56a4bb27718c64fa01aeb55d264ebe504bae7fe27e6ad" exitCode=0 Apr 23 18:24:06.500343 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:06.500235 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" Apr 23 18:24:06.500343 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:06.500243 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" event={"ID":"0fca9923-b6f0-43a8-8db9-469992c798e0","Type":"ContainerDied","Data":"0f11a0ef5c7a9d9fd1d56a4bb27718c64fa01aeb55d264ebe504bae7fe27e6ad"} Apr 23 18:24:06.500343 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:06.500270 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5" event={"ID":"0fca9923-b6f0-43a8-8db9-469992c798e0","Type":"ContainerDied","Data":"314a2a9e3b80c2be282d7b76b0fcc7ec1c382fb82eeeb860ad34060157893608"} Apr 23 18:24:06.500343 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:06.500287 2572 scope.go:117] "RemoveContainer" containerID="0f11a0ef5c7a9d9fd1d56a4bb27718c64fa01aeb55d264ebe504bae7fe27e6ad" Apr 23 18:24:06.510223 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:06.509518 2572 scope.go:117] "RemoveContainer" containerID="0f11a0ef5c7a9d9fd1d56a4bb27718c64fa01aeb55d264ebe504bae7fe27e6ad" Apr 23 18:24:06.510303 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:24:06.510277 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f11a0ef5c7a9d9fd1d56a4bb27718c64fa01aeb55d264ebe504bae7fe27e6ad\": container with ID starting with 0f11a0ef5c7a9d9fd1d56a4bb27718c64fa01aeb55d264ebe504bae7fe27e6ad not found: ID does not exist" containerID="0f11a0ef5c7a9d9fd1d56a4bb27718c64fa01aeb55d264ebe504bae7fe27e6ad" Apr 23 18:24:06.510350 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:06.510313 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f11a0ef5c7a9d9fd1d56a4bb27718c64fa01aeb55d264ebe504bae7fe27e6ad"} err="failed to get container status 
\"0f11a0ef5c7a9d9fd1d56a4bb27718c64fa01aeb55d264ebe504bae7fe27e6ad\": rpc error: code = NotFound desc = could not find container \"0f11a0ef5c7a9d9fd1d56a4bb27718c64fa01aeb55d264ebe504bae7fe27e6ad\": container with ID starting with 0f11a0ef5c7a9d9fd1d56a4bb27718c64fa01aeb55d264ebe504bae7fe27e6ad not found: ID does not exist" Apr 23 18:24:06.520966 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:06.520934 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5"] Apr 23 18:24:06.525325 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:06.525305 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-a93f3-c7fd4dc8d-r2jt5"] Apr 23 18:24:07.418246 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:07.418215 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fca9923-b6f0-43a8-8db9-469992c798e0" path="/var/lib/kubelet/pods/0fca9923-b6f0-43a8-8db9-469992c798e0/volumes" Apr 23 18:24:15.891431 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:15.890851 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v"] Apr 23 18:24:15.892741 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:15.891905 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0fca9923-b6f0-43a8-8db9-469992c798e0" containerName="splitter-graph-a93f3" Apr 23 18:24:15.892741 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:15.891933 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fca9923-b6f0-43a8-8db9-469992c798e0" containerName="splitter-graph-a93f3" Apr 23 18:24:15.892741 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:15.891967 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4df6da3d-538a-4c59-8197-289f88b09488" containerName="sequence-graph-e1fac" Apr 23 18:24:15.892741 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:15.891976 2572 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="4df6da3d-538a-4c59-8197-289f88b09488" containerName="sequence-graph-e1fac" Apr 23 18:24:15.892741 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:15.892132 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="0fca9923-b6f0-43a8-8db9-469992c798e0" containerName="splitter-graph-a93f3" Apr 23 18:24:15.892741 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:15.892150 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="4df6da3d-538a-4c59-8197-289f88b09488" containerName="sequence-graph-e1fac" Apr 23 18:24:15.897208 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:15.897182 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" Apr 23 18:24:15.899334 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:15.899314 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-87aee-serving-cert\"" Apr 23 18:24:15.899441 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:15.899337 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 23 18:24:15.899441 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:15.899377 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-jv9tx\"" Apr 23 18:24:15.899441 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:15.899315 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-87aee-kube-rbac-proxy-sar-config\"" Apr 23 18:24:15.901783 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:15.901762 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v"] Apr 23 18:24:15.983131 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:15.983092 2572 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d08ae5c-7408-451d-b12c-4c9ec3720cd0-proxy-tls\") pod \"switch-graph-87aee-db68b6cb9-f6x9v\" (UID: \"8d08ae5c-7408-451d-b12c-4c9ec3720cd0\") " pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" Apr 23 18:24:15.983131 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:15.983135 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d08ae5c-7408-451d-b12c-4c9ec3720cd0-openshift-service-ca-bundle\") pod \"switch-graph-87aee-db68b6cb9-f6x9v\" (UID: \"8d08ae5c-7408-451d-b12c-4c9ec3720cd0\") " pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" Apr 23 18:24:16.084468 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:16.084396 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d08ae5c-7408-451d-b12c-4c9ec3720cd0-proxy-tls\") pod \"switch-graph-87aee-db68b6cb9-f6x9v\" (UID: \"8d08ae5c-7408-451d-b12c-4c9ec3720cd0\") " pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" Apr 23 18:24:16.084625 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:16.084476 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d08ae5c-7408-451d-b12c-4c9ec3720cd0-openshift-service-ca-bundle\") pod \"switch-graph-87aee-db68b6cb9-f6x9v\" (UID: \"8d08ae5c-7408-451d-b12c-4c9ec3720cd0\") " pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" Apr 23 18:24:16.084625 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:24:16.084555 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/switch-graph-87aee-serving-cert: secret "switch-graph-87aee-serving-cert" not found Apr 23 18:24:16.084730 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:24:16.084635 2572 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d08ae5c-7408-451d-b12c-4c9ec3720cd0-proxy-tls podName:8d08ae5c-7408-451d-b12c-4c9ec3720cd0 nodeName:}" failed. No retries permitted until 2026-04-23 18:24:16.584611787 +0000 UTC m=+1931.779516492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8d08ae5c-7408-451d-b12c-4c9ec3720cd0-proxy-tls") pod "switch-graph-87aee-db68b6cb9-f6x9v" (UID: "8d08ae5c-7408-451d-b12c-4c9ec3720cd0") : secret "switch-graph-87aee-serving-cert" not found Apr 23 18:24:16.085056 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:16.085036 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d08ae5c-7408-451d-b12c-4c9ec3720cd0-openshift-service-ca-bundle\") pod \"switch-graph-87aee-db68b6cb9-f6x9v\" (UID: \"8d08ae5c-7408-451d-b12c-4c9ec3720cd0\") " pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" Apr 23 18:24:16.589367 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:16.589330 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d08ae5c-7408-451d-b12c-4c9ec3720cd0-proxy-tls\") pod \"switch-graph-87aee-db68b6cb9-f6x9v\" (UID: \"8d08ae5c-7408-451d-b12c-4c9ec3720cd0\") " pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" Apr 23 18:24:16.591682 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:16.591651 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d08ae5c-7408-451d-b12c-4c9ec3720cd0-proxy-tls\") pod \"switch-graph-87aee-db68b6cb9-f6x9v\" (UID: \"8d08ae5c-7408-451d-b12c-4c9ec3720cd0\") " pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" Apr 23 18:24:16.808772 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:16.808739 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" Apr 23 18:24:16.923587 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:16.923544 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v"] Apr 23 18:24:16.927437 ip-10-0-133-178 kubenswrapper[2572]: W0423 18:24:16.927386 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d08ae5c_7408_451d_b12c_4c9ec3720cd0.slice/crio-e60647a3d525a223bfb65685d541fd6f7a42fb223bfa61553ac5eccdcb5d499e WatchSource:0}: Error finding container e60647a3d525a223bfb65685d541fd6f7a42fb223bfa61553ac5eccdcb5d499e: Status 404 returned error can't find the container with id e60647a3d525a223bfb65685d541fd6f7a42fb223bfa61553ac5eccdcb5d499e Apr 23 18:24:17.531348 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:17.531317 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" event={"ID":"8d08ae5c-7408-451d-b12c-4c9ec3720cd0","Type":"ContainerStarted","Data":"9ae59a61a178684decec9349accd8adcfb149f15a7628b3385be48aa558acce4"} Apr 23 18:24:17.531348 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:17.531356 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" event={"ID":"8d08ae5c-7408-451d-b12c-4c9ec3720cd0","Type":"ContainerStarted","Data":"e60647a3d525a223bfb65685d541fd6f7a42fb223bfa61553ac5eccdcb5d499e"} Apr 23 18:24:17.531628 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:17.531436 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" Apr 23 18:24:17.547350 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:17.547305 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" 
podStartSLOduration=2.547291768 podStartE2EDuration="2.547291768s" podCreationTimestamp="2026-04-23 18:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:24:17.546165654 +0000 UTC m=+1932.741070381" watchObservedRunningTime="2026-04-23 18:24:17.547291768 +0000 UTC m=+1932.742196494" Apr 23 18:24:23.540539 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:23.540511 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" Apr 23 18:24:36.353801 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:36.353765 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2"] Apr 23 18:24:36.356723 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:36.356707 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" Apr 23 18:24:36.358753 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:36.358729 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-7efef-serving-cert\"" Apr 23 18:24:36.358896 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:36.358731 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-7efef-kube-rbac-proxy-sar-config\"" Apr 23 18:24:36.367262 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:36.367237 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2"] Apr 23 18:24:36.437214 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:36.437182 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9a7dfa6-511e-4872-ad2d-92f0ab085855-proxy-tls\") pod 
\"splitter-graph-7efef-547d76f799-6q2j2\" (UID: \"b9a7dfa6-511e-4872-ad2d-92f0ab085855\") " pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" Apr 23 18:24:36.437370 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:36.437220 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9a7dfa6-511e-4872-ad2d-92f0ab085855-openshift-service-ca-bundle\") pod \"splitter-graph-7efef-547d76f799-6q2j2\" (UID: \"b9a7dfa6-511e-4872-ad2d-92f0ab085855\") " pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" Apr 23 18:24:36.538275 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:36.538219 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9a7dfa6-511e-4872-ad2d-92f0ab085855-proxy-tls\") pod \"splitter-graph-7efef-547d76f799-6q2j2\" (UID: \"b9a7dfa6-511e-4872-ad2d-92f0ab085855\") " pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" Apr 23 18:24:36.538275 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:36.538274 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9a7dfa6-511e-4872-ad2d-92f0ab085855-openshift-service-ca-bundle\") pod \"splitter-graph-7efef-547d76f799-6q2j2\" (UID: \"b9a7dfa6-511e-4872-ad2d-92f0ab085855\") " pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" Apr 23 18:24:36.538512 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:24:36.538370 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/splitter-graph-7efef-serving-cert: secret "splitter-graph-7efef-serving-cert" not found Apr 23 18:24:36.538512 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:24:36.538453 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9a7dfa6-511e-4872-ad2d-92f0ab085855-proxy-tls 
podName:b9a7dfa6-511e-4872-ad2d-92f0ab085855 nodeName:}" failed. No retries permitted until 2026-04-23 18:24:37.038432951 +0000 UTC m=+1952.233337680 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/b9a7dfa6-511e-4872-ad2d-92f0ab085855-proxy-tls") pod "splitter-graph-7efef-547d76f799-6q2j2" (UID: "b9a7dfa6-511e-4872-ad2d-92f0ab085855") : secret "splitter-graph-7efef-serving-cert" not found Apr 23 18:24:36.538874 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:36.538857 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9a7dfa6-511e-4872-ad2d-92f0ab085855-openshift-service-ca-bundle\") pod \"splitter-graph-7efef-547d76f799-6q2j2\" (UID: \"b9a7dfa6-511e-4872-ad2d-92f0ab085855\") " pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" Apr 23 18:24:37.043616 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:37.043575 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9a7dfa6-511e-4872-ad2d-92f0ab085855-proxy-tls\") pod \"splitter-graph-7efef-547d76f799-6q2j2\" (UID: \"b9a7dfa6-511e-4872-ad2d-92f0ab085855\") " pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" Apr 23 18:24:37.045963 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:37.045943 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9a7dfa6-511e-4872-ad2d-92f0ab085855-proxy-tls\") pod \"splitter-graph-7efef-547d76f799-6q2j2\" (UID: \"b9a7dfa6-511e-4872-ad2d-92f0ab085855\") " pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" Apr 23 18:24:37.266218 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:37.266185 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" Apr 23 18:24:37.382591 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:37.382564 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2"] Apr 23 18:24:37.384916 ip-10-0-133-178 kubenswrapper[2572]: W0423 18:24:37.384885 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9a7dfa6_511e_4872_ad2d_92f0ab085855.slice/crio-4ef7f6a0454c20be15e2213ab470c61bc327ca81d209768d98b37b83cae8801e WatchSource:0}: Error finding container 4ef7f6a0454c20be15e2213ab470c61bc327ca81d209768d98b37b83cae8801e: Status 404 returned error can't find the container with id 4ef7f6a0454c20be15e2213ab470c61bc327ca81d209768d98b37b83cae8801e Apr 23 18:24:37.586750 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:37.586664 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" event={"ID":"b9a7dfa6-511e-4872-ad2d-92f0ab085855","Type":"ContainerStarted","Data":"0c23b879e794613f21a9bbfd8b271f1b58ca16e3fe057c264c574769359f746e"} Apr 23 18:24:37.586750 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:37.586700 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" event={"ID":"b9a7dfa6-511e-4872-ad2d-92f0ab085855","Type":"ContainerStarted","Data":"4ef7f6a0454c20be15e2213ab470c61bc327ca81d209768d98b37b83cae8801e"} Apr 23 18:24:37.586750 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:37.586724 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" Apr 23 18:24:37.602282 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:37.602244 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" 
podStartSLOduration=1.6022303500000001 podStartE2EDuration="1.60223035s" podCreationTimestamp="2026-04-23 18:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:24:37.601919349 +0000 UTC m=+1952.796824076" watchObservedRunningTime="2026-04-23 18:24:37.60223035 +0000 UTC m=+1952.797135076" Apr 23 18:24:43.595869 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:24:43.595834 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" Apr 23 18:27:05.439532 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:27:05.439500 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 18:27:05.443904 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:27:05.443881 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 18:27:05.447482 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:27:05.447463 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 18:27:05.451530 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:27:05.451512 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 18:32:05.457047 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:32:05.456944 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 18:32:05.461265 ip-10-0-133-178 
kubenswrapper[2572]: I0423 18:32:05.461241 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 18:32:05.465578 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:32:05.465563 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 18:32:05.469577 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:32:05.469561 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 18:32:51.126961 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:32:51.126925 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2"] Apr 23 18:32:51.127523 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:32:51.127129 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" podUID="b9a7dfa6-511e-4872-ad2d-92f0ab085855" containerName="splitter-graph-7efef" containerID="cri-o://0c23b879e794613f21a9bbfd8b271f1b58ca16e3fe057c264c574769359f746e" gracePeriod=30 Apr 23 18:32:53.593737 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:32:53.593695 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" podUID="b9a7dfa6-511e-4872-ad2d-92f0ab085855" containerName="splitter-graph-7efef" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:32:58.593659 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:32:58.593612 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" 
podUID="b9a7dfa6-511e-4872-ad2d-92f0ab085855" containerName="splitter-graph-7efef" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:33:03.593451 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:03.593393 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" podUID="b9a7dfa6-511e-4872-ad2d-92f0ab085855" containerName="splitter-graph-7efef" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:33:03.593898 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:03.593555 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" Apr 23 18:33:08.592997 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:08.592950 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" podUID="b9a7dfa6-511e-4872-ad2d-92f0ab085855" containerName="splitter-graph-7efef" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:33:13.593884 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:13.593842 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" podUID="b9a7dfa6-511e-4872-ad2d-92f0ab085855" containerName="splitter-graph-7efef" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:33:18.593687 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:18.593646 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" podUID="b9a7dfa6-511e-4872-ad2d-92f0ab085855" containerName="splitter-graph-7efef" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:33:21.258319 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:21.258297 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" Apr 23 18:33:21.289852 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:21.289824 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9a7dfa6-511e-4872-ad2d-92f0ab085855-proxy-tls\") pod \"b9a7dfa6-511e-4872-ad2d-92f0ab085855\" (UID: \"b9a7dfa6-511e-4872-ad2d-92f0ab085855\") " Apr 23 18:33:21.289983 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:21.289954 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9a7dfa6-511e-4872-ad2d-92f0ab085855-openshift-service-ca-bundle\") pod \"b9a7dfa6-511e-4872-ad2d-92f0ab085855\" (UID: \"b9a7dfa6-511e-4872-ad2d-92f0ab085855\") " Apr 23 18:33:21.290291 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:21.290261 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9a7dfa6-511e-4872-ad2d-92f0ab085855-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "b9a7dfa6-511e-4872-ad2d-92f0ab085855" (UID: "b9a7dfa6-511e-4872-ad2d-92f0ab085855"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:33:21.291799 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:21.291773 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9a7dfa6-511e-4872-ad2d-92f0ab085855-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b9a7dfa6-511e-4872-ad2d-92f0ab085855" (UID: "b9a7dfa6-511e-4872-ad2d-92f0ab085855"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:33:21.390472 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:21.390354 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9a7dfa6-511e-4872-ad2d-92f0ab085855-openshift-service-ca-bundle\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:33:21.390472 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:21.390388 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9a7dfa6-511e-4872-ad2d-92f0ab085855-proxy-tls\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:33:21.982603 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:21.982569 2572 generic.go:358] "Generic (PLEG): container finished" podID="b9a7dfa6-511e-4872-ad2d-92f0ab085855" containerID="0c23b879e794613f21a9bbfd8b271f1b58ca16e3fe057c264c574769359f746e" exitCode=0 Apr 23 18:33:21.982803 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:21.982630 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" Apr 23 18:33:21.982803 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:21.982659 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" event={"ID":"b9a7dfa6-511e-4872-ad2d-92f0ab085855","Type":"ContainerDied","Data":"0c23b879e794613f21a9bbfd8b271f1b58ca16e3fe057c264c574769359f746e"} Apr 23 18:33:21.982803 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:21.982703 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2" event={"ID":"b9a7dfa6-511e-4872-ad2d-92f0ab085855","Type":"ContainerDied","Data":"4ef7f6a0454c20be15e2213ab470c61bc327ca81d209768d98b37b83cae8801e"} Apr 23 18:33:21.982803 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:21.982725 2572 scope.go:117] "RemoveContainer" containerID="0c23b879e794613f21a9bbfd8b271f1b58ca16e3fe057c264c574769359f746e" Apr 23 18:33:21.990260 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:21.990243 2572 scope.go:117] "RemoveContainer" containerID="0c23b879e794613f21a9bbfd8b271f1b58ca16e3fe057c264c574769359f746e" Apr 23 18:33:21.990517 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:33:21.990495 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c23b879e794613f21a9bbfd8b271f1b58ca16e3fe057c264c574769359f746e\": container with ID starting with 0c23b879e794613f21a9bbfd8b271f1b58ca16e3fe057c264c574769359f746e not found: ID does not exist" containerID="0c23b879e794613f21a9bbfd8b271f1b58ca16e3fe057c264c574769359f746e" Apr 23 18:33:21.990621 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:21.990523 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c23b879e794613f21a9bbfd8b271f1b58ca16e3fe057c264c574769359f746e"} err="failed to get container status 
\"0c23b879e794613f21a9bbfd8b271f1b58ca16e3fe057c264c574769359f746e\": rpc error: code = NotFound desc = could not find container \"0c23b879e794613f21a9bbfd8b271f1b58ca16e3fe057c264c574769359f746e\": container with ID starting with 0c23b879e794613f21a9bbfd8b271f1b58ca16e3fe057c264c574769359f746e not found: ID does not exist" Apr 23 18:33:21.998049 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:21.998030 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2"] Apr 23 18:33:22.002335 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:22.002317 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-7efef-547d76f799-6q2j2"] Apr 23 18:33:23.418076 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:33:23.418045 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9a7dfa6-511e-4872-ad2d-92f0ab085855" path="/var/lib/kubelet/pods/b9a7dfa6-511e-4872-ad2d-92f0ab085855/volumes" Apr 23 18:37:05.473746 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:37:05.473712 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 18:37:05.478010 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:37:05.477989 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 18:37:05.482826 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:37:05.482810 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log" Apr 23 18:37:05.487079 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:37:05.487063 2572 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-ip-10-0-133-178.ec2.internal_2e27a1d033408744b4b8c34c52f01b43/kube-rbac-proxy-crio/4.log" Apr 23 18:40:35.055153 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:35.055116 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v"] Apr 23 18:40:35.055736 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:35.055371 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" podUID="8d08ae5c-7408-451d-b12c-4c9ec3720cd0" containerName="switch-graph-87aee" containerID="cri-o://9ae59a61a178684decec9349accd8adcfb149f15a7628b3385be48aa558acce4" gracePeriod=30 Apr 23 18:40:38.538130 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:38.538086 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" podUID="8d08ae5c-7408-451d-b12c-4c9ec3720cd0" containerName="switch-graph-87aee" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:40:43.538070 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:43.538031 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" podUID="8d08ae5c-7408-451d-b12c-4c9ec3720cd0" containerName="switch-graph-87aee" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:40:48.538731 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:48.538691 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" podUID="8d08ae5c-7408-451d-b12c-4c9ec3720cd0" containerName="switch-graph-87aee" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:40:48.539154 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:48.538808 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" Apr 23 18:40:49.874940 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:49.874915 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-87aee-db68b6cb9-f6x9v_8d08ae5c-7408-451d-b12c-4c9ec3720cd0/switch-graph-87aee/0.log" Apr 23 18:40:50.724019 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:50.723991 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-87aee-db68b6cb9-f6x9v_8d08ae5c-7408-451d-b12c-4c9ec3720cd0/switch-graph-87aee/0.log" Apr 23 18:40:51.597224 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:51.597197 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-87aee-db68b6cb9-f6x9v_8d08ae5c-7408-451d-b12c-4c9ec3720cd0/switch-graph-87aee/0.log" Apr 23 18:40:52.396803 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:52.396769 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-87aee-db68b6cb9-f6x9v_8d08ae5c-7408-451d-b12c-4c9ec3720cd0/switch-graph-87aee/0.log" Apr 23 18:40:53.204099 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:53.204064 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-87aee-db68b6cb9-f6x9v_8d08ae5c-7408-451d-b12c-4c9ec3720cd0/switch-graph-87aee/0.log" Apr 23 18:40:53.537994 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:53.537946 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" podUID="8d08ae5c-7408-451d-b12c-4c9ec3720cd0" containerName="switch-graph-87aee" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:40:54.009558 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:54.009531 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-87aee-db68b6cb9-f6x9v_8d08ae5c-7408-451d-b12c-4c9ec3720cd0/switch-graph-87aee/0.log" Apr 23 
18:40:54.818856 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:54.818824 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-87aee-db68b6cb9-f6x9v_8d08ae5c-7408-451d-b12c-4c9ec3720cd0/switch-graph-87aee/0.log" Apr 23 18:40:55.638172 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:55.638142 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-87aee-db68b6cb9-f6x9v_8d08ae5c-7408-451d-b12c-4c9ec3720cd0/switch-graph-87aee/0.log" Apr 23 18:40:56.459552 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:56.459521 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-87aee-db68b6cb9-f6x9v_8d08ae5c-7408-451d-b12c-4c9ec3720cd0/switch-graph-87aee/0.log" Apr 23 18:40:57.275034 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:57.275006 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-87aee-db68b6cb9-f6x9v_8d08ae5c-7408-451d-b12c-4c9ec3720cd0/switch-graph-87aee/0.log" Apr 23 18:40:58.104334 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:58.104306 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-87aee-db68b6cb9-f6x9v_8d08ae5c-7408-451d-b12c-4c9ec3720cd0/switch-graph-87aee/0.log" Apr 23 18:40:58.538050 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:58.538009 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" podUID="8d08ae5c-7408-451d-b12c-4c9ec3720cd0" containerName="switch-graph-87aee" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:40:58.913157 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:40:58.913082 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-87aee-db68b6cb9-f6x9v_8d08ae5c-7408-451d-b12c-4c9ec3720cd0/switch-graph-87aee/0.log" Apr 23 18:41:03.538755 ip-10-0-133-178 kubenswrapper[2572]: I0423 
18:41:03.538717 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" podUID="8d08ae5c-7408-451d-b12c-4c9ec3720cd0" containerName="switch-graph-87aee" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:41:05.195617 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:05.195590 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" Apr 23 18:41:05.214708 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:05.214677 2572 generic.go:358] "Generic (PLEG): container finished" podID="8d08ae5c-7408-451d-b12c-4c9ec3720cd0" containerID="9ae59a61a178684decec9349accd8adcfb149f15a7628b3385be48aa558acce4" exitCode=0 Apr 23 18:41:05.214855 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:05.214736 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" Apr 23 18:41:05.214855 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:05.214763 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" event={"ID":"8d08ae5c-7408-451d-b12c-4c9ec3720cd0","Type":"ContainerDied","Data":"9ae59a61a178684decec9349accd8adcfb149f15a7628b3385be48aa558acce4"} Apr 23 18:41:05.214855 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:05.214802 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v" event={"ID":"8d08ae5c-7408-451d-b12c-4c9ec3720cd0","Type":"ContainerDied","Data":"e60647a3d525a223bfb65685d541fd6f7a42fb223bfa61553ac5eccdcb5d499e"} Apr 23 18:41:05.214855 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:05.214822 2572 scope.go:117] "RemoveContainer" containerID="9ae59a61a178684decec9349accd8adcfb149f15a7628b3385be48aa558acce4" Apr 23 18:41:05.226526 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:05.226482 2572 scope.go:117] 
"RemoveContainer" containerID="9ae59a61a178684decec9349accd8adcfb149f15a7628b3385be48aa558acce4" Apr 23 18:41:05.226749 ip-10-0-133-178 kubenswrapper[2572]: E0423 18:41:05.226730 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ae59a61a178684decec9349accd8adcfb149f15a7628b3385be48aa558acce4\": container with ID starting with 9ae59a61a178684decec9349accd8adcfb149f15a7628b3385be48aa558acce4 not found: ID does not exist" containerID="9ae59a61a178684decec9349accd8adcfb149f15a7628b3385be48aa558acce4" Apr 23 18:41:05.226826 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:05.226756 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ae59a61a178684decec9349accd8adcfb149f15a7628b3385be48aa558acce4"} err="failed to get container status \"9ae59a61a178684decec9349accd8adcfb149f15a7628b3385be48aa558acce4\": rpc error: code = NotFound desc = could not find container \"9ae59a61a178684decec9349accd8adcfb149f15a7628b3385be48aa558acce4\": container with ID starting with 9ae59a61a178684decec9349accd8adcfb149f15a7628b3385be48aa558acce4 not found: ID does not exist" Apr 23 18:41:05.351631 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:05.351550 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d08ae5c-7408-451d-b12c-4c9ec3720cd0-openshift-service-ca-bundle\") pod \"8d08ae5c-7408-451d-b12c-4c9ec3720cd0\" (UID: \"8d08ae5c-7408-451d-b12c-4c9ec3720cd0\") " Apr 23 18:41:05.351631 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:05.351601 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d08ae5c-7408-451d-b12c-4c9ec3720cd0-proxy-tls\") pod \"8d08ae5c-7408-451d-b12c-4c9ec3720cd0\" (UID: \"8d08ae5c-7408-451d-b12c-4c9ec3720cd0\") " Apr 23 18:41:05.351875 ip-10-0-133-178 
kubenswrapper[2572]: I0423 18:41:05.351849 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d08ae5c-7408-451d-b12c-4c9ec3720cd0-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "8d08ae5c-7408-451d-b12c-4c9ec3720cd0" (UID: "8d08ae5c-7408-451d-b12c-4c9ec3720cd0"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:41:05.353582 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:05.353540 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d08ae5c-7408-451d-b12c-4c9ec3720cd0-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "8d08ae5c-7408-451d-b12c-4c9ec3720cd0" (UID: "8d08ae5c-7408-451d-b12c-4c9ec3720cd0"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:41:05.452771 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:05.452740 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d08ae5c-7408-451d-b12c-4c9ec3720cd0-openshift-service-ca-bundle\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:41:05.452771 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:05.452771 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d08ae5c-7408-451d-b12c-4c9ec3720cd0-proxy-tls\") on node \"ip-10-0-133-178.ec2.internal\" DevicePath \"\"" Apr 23 18:41:05.530493 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:05.530470 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v"] Apr 23 18:41:05.533858 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:05.533838 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-87aee-db68b6cb9-f6x9v"] Apr 23 18:41:06.038979 ip-10-0-133-178 kubenswrapper[2572]: I0423 
18:41:06.038955 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-n95c8_91e65909-6fc5-43ad-9403-4e762e15651f/global-pull-secret-syncer/0.log" Apr 23 18:41:06.114492 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:06.114458 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-cwppv_26df88a3-a37a-4023-9f9f-cce91875523b/konnectivity-agent/0.log" Apr 23 18:41:06.193492 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:06.193453 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-133-178.ec2.internal_422f53ca0cc951b394e5e5ec59460e85/haproxy/0.log" Apr 23 18:41:07.418261 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:07.418216 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d08ae5c-7408-451d-b12c-4c9ec3720cd0" path="/var/lib/kubelet/pods/8d08ae5c-7408-451d-b12c-4c9ec3720cd0/volumes" Apr 23 18:41:10.006513 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:10.006481 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-7dmlv_e178bb76-2a9b-4c0b-a47c-8be8d733a32a/node-exporter/0.log" Apr 23 18:41:10.034794 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:10.034763 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-7dmlv_e178bb76-2a9b-4c0b-a47c-8be8d733a32a/kube-rbac-proxy/0.log" Apr 23 18:41:10.068302 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:10.068272 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-7dmlv_e178bb76-2a9b-4c0b-a47c-8be8d733a32a/init-textfile/0.log" Apr 23 18:41:12.864648 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:12.864619 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-fbf6f99cd-qhqr9_0d442ca0-87d3-49df-afc8-a3323de055cd/console/0.log" Apr 23 18:41:12.964386 ip-10-0-133-178 
kubenswrapper[2572]: I0423 18:41:12.964355 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"] Apr 23 18:41:12.964668 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:12.964655 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d08ae5c-7408-451d-b12c-4c9ec3720cd0" containerName="switch-graph-87aee" Apr 23 18:41:12.964668 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:12.964670 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d08ae5c-7408-451d-b12c-4c9ec3720cd0" containerName="switch-graph-87aee" Apr 23 18:41:12.964754 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:12.964679 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b9a7dfa6-511e-4872-ad2d-92f0ab085855" containerName="splitter-graph-7efef" Apr 23 18:41:12.964754 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:12.964684 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9a7dfa6-511e-4872-ad2d-92f0ab085855" containerName="splitter-graph-7efef" Apr 23 18:41:12.964754 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:12.964728 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="8d08ae5c-7408-451d-b12c-4c9ec3720cd0" containerName="switch-graph-87aee" Apr 23 18:41:12.964754 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:12.964736 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="b9a7dfa6-511e-4872-ad2d-92f0ab085855" containerName="splitter-graph-7efef" Apr 23 18:41:12.969164 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:12.969133 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:12.971271 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:12.971245 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-hqhtf\"/\"default-dockercfg-7jg68\""
Apr 23 18:41:12.971702 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:12.971687 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-hqhtf\"/\"kube-root-ca.crt\""
Apr 23 18:41:12.971924 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:12.971909 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-hqhtf\"/\"openshift-service-ca.crt\""
Apr 23 18:41:12.979931 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:12.979910 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"]
Apr 23 18:41:13.111064 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.111035 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/51005d83-09ad-4736-8139-d5e0df38c1ac-sys\") pod \"perf-node-gather-daemonset-jtggn\" (UID: \"51005d83-09ad-4736-8139-d5e0df38c1ac\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:13.111232 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.111075 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp9xd\" (UniqueName: \"kubernetes.io/projected/51005d83-09ad-4736-8139-d5e0df38c1ac-kube-api-access-sp9xd\") pod \"perf-node-gather-daemonset-jtggn\" (UID: \"51005d83-09ad-4736-8139-d5e0df38c1ac\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:13.111232 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.111097 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/51005d83-09ad-4736-8139-d5e0df38c1ac-proc\") pod \"perf-node-gather-daemonset-jtggn\" (UID: \"51005d83-09ad-4736-8139-d5e0df38c1ac\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:13.111232 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.111220 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/51005d83-09ad-4736-8139-d5e0df38c1ac-podres\") pod \"perf-node-gather-daemonset-jtggn\" (UID: \"51005d83-09ad-4736-8139-d5e0df38c1ac\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:13.111363 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.111249 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51005d83-09ad-4736-8139-d5e0df38c1ac-lib-modules\") pod \"perf-node-gather-daemonset-jtggn\" (UID: \"51005d83-09ad-4736-8139-d5e0df38c1ac\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:13.211983 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.211948 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/51005d83-09ad-4736-8139-d5e0df38c1ac-sys\") pod \"perf-node-gather-daemonset-jtggn\" (UID: \"51005d83-09ad-4736-8139-d5e0df38c1ac\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:13.212153 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.212047 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/51005d83-09ad-4736-8139-d5e0df38c1ac-sys\") pod \"perf-node-gather-daemonset-jtggn\" (UID: \"51005d83-09ad-4736-8139-d5e0df38c1ac\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:13.212153 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.212076 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sp9xd\" (UniqueName: \"kubernetes.io/projected/51005d83-09ad-4736-8139-d5e0df38c1ac-kube-api-access-sp9xd\") pod \"perf-node-gather-daemonset-jtggn\" (UID: \"51005d83-09ad-4736-8139-d5e0df38c1ac\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:13.212153 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.212109 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/51005d83-09ad-4736-8139-d5e0df38c1ac-proc\") pod \"perf-node-gather-daemonset-jtggn\" (UID: \"51005d83-09ad-4736-8139-d5e0df38c1ac\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:13.212319 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.212170 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/51005d83-09ad-4736-8139-d5e0df38c1ac-podres\") pod \"perf-node-gather-daemonset-jtggn\" (UID: \"51005d83-09ad-4736-8139-d5e0df38c1ac\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:13.212319 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.212201 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51005d83-09ad-4736-8139-d5e0df38c1ac-lib-modules\") pod \"perf-node-gather-daemonset-jtggn\" (UID: \"51005d83-09ad-4736-8139-d5e0df38c1ac\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:13.212319 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.212282 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/51005d83-09ad-4736-8139-d5e0df38c1ac-proc\") pod \"perf-node-gather-daemonset-jtggn\" (UID: \"51005d83-09ad-4736-8139-d5e0df38c1ac\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:13.212471 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.212326 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/51005d83-09ad-4736-8139-d5e0df38c1ac-podres\") pod \"perf-node-gather-daemonset-jtggn\" (UID: \"51005d83-09ad-4736-8139-d5e0df38c1ac\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:13.212471 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.212339 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51005d83-09ad-4736-8139-d5e0df38c1ac-lib-modules\") pod \"perf-node-gather-daemonset-jtggn\" (UID: \"51005d83-09ad-4736-8139-d5e0df38c1ac\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:13.220622 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.220605 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp9xd\" (UniqueName: \"kubernetes.io/projected/51005d83-09ad-4736-8139-d5e0df38c1ac-kube-api-access-sp9xd\") pod \"perf-node-gather-daemonset-jtggn\" (UID: \"51005d83-09ad-4736-8139-d5e0df38c1ac\") " pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:13.278117 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.278089 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:13.394471 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.394439 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"]
Apr 23 18:41:13.397892 ip-10-0-133-178 kubenswrapper[2572]: W0423 18:41:13.397862 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod51005d83_09ad_4736_8139_d5e0df38c1ac.slice/crio-a35b4e721c276d84a155c69ec5182578944d36c45c244cd85b2ba8f1b7668406 WatchSource:0}: Error finding container a35b4e721c276d84a155c69ec5182578944d36c45c244cd85b2ba8f1b7668406: Status 404 returned error can't find the container with id a35b4e721c276d84a155c69ec5182578944d36c45c244cd85b2ba8f1b7668406
Apr 23 18:41:13.399358 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:13.399343 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 18:41:14.106875 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:14.106848 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-vlmvx_d451234e-ffc1-49bd-b43f-8b0057291cc5/dns/0.log"
Apr 23 18:41:14.133110 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:14.133087 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-vlmvx_d451234e-ffc1-49bd-b43f-8b0057291cc5/kube-rbac-proxy/0.log"
Apr 23 18:41:14.183370 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:14.183345 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-hxdgk_ae6f204b-0425-4e4c-8749-41bce4ec27bd/dns-node-resolver/0.log"
Apr 23 18:41:14.237650 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:14.237620 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn" event={"ID":"51005d83-09ad-4736-8139-d5e0df38c1ac","Type":"ContainerStarted","Data":"b79c03a414bd79d88add95dfc29975db4a2bcd3c25c5a92923e3b88edb11da68"}
Apr 23 18:41:14.237650 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:14.237653 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn" event={"ID":"51005d83-09ad-4736-8139-d5e0df38c1ac","Type":"ContainerStarted","Data":"a35b4e721c276d84a155c69ec5182578944d36c45c244cd85b2ba8f1b7668406"}
Apr 23 18:41:14.237857 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:14.237741 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:14.253021 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:14.252905 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn" podStartSLOduration=2.252891573 podStartE2EDuration="2.252891573s" podCreationTimestamp="2026-04-23 18:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:41:14.252070525 +0000 UTC m=+2949.446975252" watchObservedRunningTime="2026-04-23 18:41:14.252891573 +0000 UTC m=+2949.447796301"
Apr 23 18:41:14.660044 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:14.660011 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-h9g78_c4c9afb7-fbe4-44de-b1b1-c6a1f86b72dd/node-ca/0.log"
Apr 23 18:41:15.807659 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:15.807588 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-jljrn_9b33314e-4870-41bb-a49e-503d87fbf785/serve-healthcheck-canary/0.log"
Apr 23 18:41:16.349477 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:16.349444 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-vn5xz_3cebb2bf-0419-4c26-b3b0-732d1737d1b3/kube-rbac-proxy/0.log"
Apr 23 18:41:16.370759 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:16.370737 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-vn5xz_3cebb2bf-0419-4c26-b3b0-732d1737d1b3/exporter/0.log"
Apr 23 18:41:16.392440 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:16.392411 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-vn5xz_3cebb2bf-0419-4c26-b3b0-732d1737d1b3/extractor/0.log"
Apr 23 18:41:18.502614 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:18.502583 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_llmisvc-controller-manager-68cc5db7c4-n9pld_2c4bece2-d94d-4087-98bd-29e1c9e938fd/manager/0.log"
Apr 23 18:41:20.249553 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:20.249528 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-hqhtf/perf-node-gather-daemonset-jtggn"
Apr 23 18:41:24.305140 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:24.305103 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6brjb_30e5a914-97b2-4c21-985a-db4f9913ea08/kube-multus/0.log"
Apr 23 18:41:24.500193 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:24.500162 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-f9ndr_012f7036-9d2e-45a6-985c-701982b85f46/kube-multus-additional-cni-plugins/0.log"
Apr 23 18:41:24.522960 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:24.522936 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-f9ndr_012f7036-9d2e-45a6-985c-701982b85f46/egress-router-binary-copy/0.log"
Apr 23 18:41:24.545143 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:24.545120 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-f9ndr_012f7036-9d2e-45a6-985c-701982b85f46/cni-plugins/0.log"
Apr 23 18:41:24.582800 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:24.582772 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-f9ndr_012f7036-9d2e-45a6-985c-701982b85f46/bond-cni-plugin/0.log"
Apr 23 18:41:24.628514 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:24.628477 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-f9ndr_012f7036-9d2e-45a6-985c-701982b85f46/routeoverride-cni/0.log"
Apr 23 18:41:24.656668 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:24.656643 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-f9ndr_012f7036-9d2e-45a6-985c-701982b85f46/whereabouts-cni-bincopy/0.log"
Apr 23 18:41:24.677855 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:24.677832 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-f9ndr_012f7036-9d2e-45a6-985c-701982b85f46/whereabouts-cni/0.log"
Apr 23 18:41:24.998965 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:24.998935 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-lm6wc_f7526a98-a284-45c2-aeb2-cce4ddcd8f45/network-metrics-daemon/0.log"
Apr 23 18:41:25.022948 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:25.022928 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-lm6wc_f7526a98-a284-45c2-aeb2-cce4ddcd8f45/kube-rbac-proxy/0.log"
Apr 23 18:41:26.382557 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:26.382518 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-controller/0.log"
Apr 23 18:41:26.403203 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:26.403171 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/0.log"
Apr 23 18:41:26.414610 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:26.414587 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovn-acl-logging/1.log"
Apr 23 18:41:26.433778 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:26.433750 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/kube-rbac-proxy-node/0.log"
Apr 23 18:41:26.457537 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:26.457513 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/kube-rbac-proxy-ovn-metrics/0.log"
Apr 23 18:41:26.484734 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:26.484715 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/northd/0.log"
Apr 23 18:41:26.510144 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:26.510123 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/nbdb/0.log"
Apr 23 18:41:26.539029 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:26.539009 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/sbdb/0.log"
Apr 23 18:41:26.645815 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:26.645743 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9pcc_3c2da17f-0591-4850-9fa2-fde2a8c1a8d5/ovnkube-controller/0.log"
Apr 23 18:41:27.701985 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:27.701955 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-9wq98_4b5c0501-ab5e-4cac-9c9f-f306624ec47f/network-check-target-container/0.log"
Apr 23 18:41:28.638154 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:28.638127 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-4wmwg_095aaf33-9f06-4dd6-ab66-144f189b570f/iptables-alerter/0.log"
Apr 23 18:41:29.379752 ip-10-0-133-178 kubenswrapper[2572]: I0423 18:41:29.379726 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-9455k_cc1881ec-f1a3-4551-ac37-e01f270956dc/tuned/0.log"