Apr 16 18:13:48.438951 ip-10-0-141-219 systemd[1]: kubelet.service: Failed to load environment files: No such file or directory
Apr 16 18:13:48.438964 ip-10-0-141-219 systemd[1]: kubelet.service: Failed to run 'start-pre' task: No such file or directory
Apr 16 18:13:48.438974 ip-10-0-141-219 systemd[1]: kubelet.service: Failed with result 'resources'.
Apr 16 18:13:48.439304 ip-10-0-141-219 systemd[1]: Failed to start Kubernetes Kubelet.
Apr 16 18:13:58.624390 ip-10-0-141-219 systemd[1]: kubelet.service: Failed to schedule restart job: Unit crio.service not found.
Apr 16 18:13:58.624413 ip-10-0-141-219 systemd[1]: kubelet.service: Failed with result 'resources'.
-- Boot b8e7af62453a42a38f482a2ea5d2a899 --
Apr 16 18:16:09.074564 ip-10-0-141-219 systemd[1]: Starting Kubernetes Kubelet...
Apr 16 18:16:09.534791 ip-10-0-141-219 kubenswrapper[2570]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 18:16:09.534791 ip-10-0-141-219 kubenswrapper[2570]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 16 18:16:09.534791 ip-10-0-141-219 kubenswrapper[2570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 18:16:09.534791 ip-10-0-141-219 kubenswrapper[2570]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 18:16:09.534791 ip-10-0-141-219 kubenswrapper[2570]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
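Three of the deprecation warnings above point at the same remedy: move the flag into the KubeletConfiguration file named by --config (on this node /etc/kubernetes/kubelet.conf, per the FLAG dump further down). Below is a minimal sketch of the corresponding stanza, assuming the kubelet.config.k8s.io/v1beta1 config-file API; the endpoint, plugin directory, and system-reserved values are copied from this node's own FLAG dump (with the conventional unix:// scheme added), while the evictionHard threshold, which supersedes --minimum-container-ttl-duration, is only an illustrative placeholder. --pod-infra-container-image has no config-file replacement; per the warning, the image garbage collector gets the sandbox image from the CRI instead.

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  containerRuntimeEndpoint: unix:///var/run/crio/crio.sock      # replaces --container-runtime-endpoint
  volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # replaces --volume-plugin-dir
  systemReserved:                                               # replaces --system-reserved
    cpu: 500m
    memory: 1Gi
    ephemeral-storage: 1Gi
  evictionHard:                # eviction settings supersede --minimum-container-ttl-duration;
    memory.available: "100Mi"  # example threshold, not a value taken from this log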
Apr 16 18:16:09.537891 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.537794 2570 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 18:16:09.542019 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542003 2570 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 18:16:09.542019 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542019 2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542023 2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542026 2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542029 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542032 2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542035 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542038 2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542041 2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542043 2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542047 2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542049 2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542052 2570 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542054 2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542057 2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542060 2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542071 2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542074 2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542077 2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542080 2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542082 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 18:16:09.542089 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542085 2570 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542088 2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542090 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542093 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542097 2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542101 2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542104 2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542107 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542110 2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542114 2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542117 2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542120 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542123 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542126 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542129 2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542132 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542134 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542137 2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542139 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 18:16:09.542612 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542142 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542144 2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542147 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542149 2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542152 2570 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542156 2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542158 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542161 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542164 2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542166 2570 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542168 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542171 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542173 2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542176 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542178 2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542183 2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542187 2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542191 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542194 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542198 2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 18:16:09.543097 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542201 2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542205 2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542207 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542210 2570 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542213 2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542215 2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542238 2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542243 2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542245 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542248 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542251 2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542253 2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542256 2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542258 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542261 2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542264 2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542266 2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542269 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542271 2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542274 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 18:16:09.543695 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542277 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 18:16:09.544183 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542279 2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 18:16:09.544183 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542282 2570 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 18:16:09.544183 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542284 2570 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 18:16:09.544183 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542287 2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 18:16:09.544183 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.542289 2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
-- The full set of "unrecognized feature gate" warnings above (including the KMSv1 and ServiceAccountTokenNodeBinding notices) is emitted a second time at 18:16:09.542683-542916 --
Apr 16 18:16:09.546187 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.542996 2570 flags.go:64] FLAG: --address="0.0.0.0"
Apr 16 18:16:09.546187 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543004 2570 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 16 18:16:09.546187 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543011 2570 flags.go:64] FLAG: --anonymous-auth="true"
Apr 16 18:16:09.546187 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543016 2570 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 16 18:16:09.546187 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543020 2570 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 16 18:16:09.546187 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543023 2570 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 16 18:16:09.546187 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543028 2570 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 16 18:16:09.546187 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543038 2570 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 16 18:16:09.546187 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543041 2570 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543044 2570 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543048 2570 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543055 2570 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543059 2570 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543062 2570 flags.go:64] FLAG: --cgroup-root=""
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543065 2570 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543068 2570 flags.go:64] FLAG: --client-ca-file=""
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543070 2570 flags.go:64] FLAG: --cloud-config=""
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543074 2570 flags.go:64] FLAG: --cloud-provider="external"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543077 2570 flags.go:64] FLAG: --cluster-dns="[]"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543081 2570 flags.go:64] FLAG: --cluster-domain=""
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543084 2570 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543087 2570 flags.go:64] FLAG: --config-dir=""
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543090 2570 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543093 2570 flags.go:64] FLAG: --container-log-max-files="5"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543097 2570 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543101 2570 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543105 2570 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543109 2570 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543112 2570 flags.go:64] FLAG: --contention-profiling="false"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543115 2570 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543118 2570 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543121 2570 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543124 2570 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 16 18:16:09.546743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543128 2570 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543131 2570 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543134 2570 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543137 2570 flags.go:64] FLAG: --enable-load-reader="false"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543140 2570 flags.go:64] FLAG: --enable-server="true"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543143 2570 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543149 2570 flags.go:64] FLAG: --event-burst="100"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543152 2570 flags.go:64] FLAG: --event-qps="50"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543156 2570 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543159 2570 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543164 2570 flags.go:64] FLAG: --eviction-hard=""
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543168 2570 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543170 2570 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543173 2570 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543176 2570 flags.go:64] FLAG: --eviction-soft=""
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543180 2570 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543183 2570 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543186 2570 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543189 2570 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543191 2570 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543194 2570 flags.go:64] FLAG: --fail-swap-on="true"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543197 2570 flags.go:64] FLAG: --feature-gates=""
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543201 2570 flags.go:64] FLAG: --file-check-frequency="20s"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543204 2570 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543208 2570 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Apr 16 18:16:09.547357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543213 2570 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543216 2570 flags.go:64] FLAG: --healthz-port="10248"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543219 2570 flags.go:64] FLAG: --help="false"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543222 2570 flags.go:64] FLAG: --hostname-override="ip-10-0-141-219.ec2.internal"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543238 2570 flags.go:64] FLAG: --housekeeping-interval="10s"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543241 2570 flags.go:64] FLAG: --http-check-frequency="20s"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543244 2570 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543247 2570 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543251 2570 flags.go:64] FLAG: --image-gc-high-threshold="85"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543254 2570 flags.go:64] FLAG: --image-gc-low-threshold="80"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543257 2570 flags.go:64] FLAG: --image-service-endpoint=""
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543260 2570 flags.go:64] FLAG: --kernel-memcg-notification="false"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543262 2570 flags.go:64] FLAG: --kube-api-burst="100"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543265 2570 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543269 2570 flags.go:64] FLAG: --kube-api-qps="50"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543271 2570 flags.go:64] FLAG: --kube-reserved=""
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543274 2570 flags.go:64] FLAG: --kube-reserved-cgroup=""
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543279 2570 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543282 2570 flags.go:64] FLAG: --kubelet-cgroups=""
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543285 2570 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543288 2570 flags.go:64] FLAG: --lock-file=""
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543291 2570 flags.go:64] FLAG: --log-cadvisor-usage="false"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543294 2570 flags.go:64] FLAG: --log-flush-frequency="5s"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543297 2570 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Apr 16 18:16:09.547998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543303 2570 flags.go:64] FLAG: --log-json-split-stream="false"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543306 2570 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543309 2570 flags.go:64] FLAG: --log-text-split-stream="false"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543312 2570 flags.go:64] FLAG: --logging-format="text"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543315 2570 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543319 2570 flags.go:64] FLAG: --make-iptables-util-chains="true"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543322 2570 flags.go:64] FLAG: --manifest-url=""
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543325 2570 flags.go:64] FLAG: --manifest-url-header=""
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543333 2570 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543337 2570 flags.go:64] FLAG: --max-open-files="1000000"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543341 2570 flags.go:64] FLAG: --max-pods="110"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543344 2570 flags.go:64] FLAG: --maximum-dead-containers="-1"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543347 2570 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543350 2570 flags.go:64] FLAG: --memory-manager-policy="None"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543353 2570 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543356 2570 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543359 2570 flags.go:64] FLAG: --node-ip="0.0.0.0"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543362 2570 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543369 2570 flags.go:64] FLAG: --node-status-max-images="50"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543372 2570 flags.go:64] FLAG: --node-status-update-frequency="10s"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543376 2570 flags.go:64] FLAG: --oom-score-adj="-999"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543379 2570 flags.go:64] FLAG: --pod-cidr=""
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543382 2570 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc76bab72f320de3d4105c90d73c4fb139c09e20ce0fa8dcbc0cb59920d27dec"
Apr 16 18:16:09.548589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543388 2570 flags.go:64] FLAG: --pod-manifest-path=""
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543391 2570 flags.go:64] FLAG: --pod-max-pids="-1"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543394 2570 flags.go:64] FLAG: --pods-per-core="0"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543399 2570 flags.go:64] FLAG: --port="10250"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543402 2570 flags.go:64] FLAG: --protect-kernel-defaults="false"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543405 2570 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-0025d601eb7781fb3"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543409 2570 flags.go:64] FLAG: --qos-reserved=""
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543412 2570 flags.go:64] FLAG: --read-only-port="10255"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543415 2570 flags.go:64] FLAG: --register-node="true"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543417 2570 flags.go:64] FLAG: --register-schedulable="true"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543421 2570 flags.go:64] FLAG: --register-with-taints=""
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543424 2570 flags.go:64] FLAG: --registry-burst="10"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543427 2570 flags.go:64] FLAG: --registry-qps="5"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543430 2570 flags.go:64] FLAG: --reserved-cpus=""
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543433 2570 flags.go:64] FLAG: --reserved-memory=""
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543437 2570 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543441 2570 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543445 2570 flags.go:64] FLAG: --rotate-certificates="false"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543448 2570 flags.go:64] FLAG: --rotate-server-certificates="false"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543451 2570 flags.go:64] FLAG: --runonce="false"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543454 2570 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543458 2570 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543461 2570 flags.go:64] FLAG: --seccomp-default="false"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543464 2570 flags.go:64] FLAG: --serialize-image-pulls="true"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543467 2570 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543470 2570 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Apr 16 18:16:09.549145 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543473 2570 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543476 2570 flags.go:64] FLAG: --storage-driver-password="root"
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543479 2570 flags.go:64] FLAG: --storage-driver-secure="false"
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543482 2570 flags.go:64] FLAG: --storage-driver-table="stats"
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543484 2570 flags.go:64] FLAG: --storage-driver-user="root"
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543487 2570 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543491 2570 flags.go:64] FLAG: --sync-frequency="1m0s"
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543494 2570 flags.go:64] FLAG: --system-cgroups=""
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543496 2570 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543503 2570 flags.go:64] FLAG: --system-reserved-cgroup=""
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543506 2570 flags.go:64] FLAG: --tls-cert-file=""
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543509 2570 flags.go:64] FLAG: --tls-cipher-suites="[]"
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543513 2570 flags.go:64] FLAG: --tls-min-version=""
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543516 2570 flags.go:64] FLAG: --tls-private-key-file=""
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543519 2570 flags.go:64] FLAG: --topology-manager-policy="none"
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543521 2570 flags.go:64] FLAG: --topology-manager-policy-options=""
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543525 2570 flags.go:64] FLAG: --topology-manager-scope="container"
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543528 2570 flags.go:64] FLAG: --v="2"
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543532 2570 flags.go:64] FLAG: --version="false"
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543537 2570 flags.go:64] FLAG: --vmodule=""
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543541 2570 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Apr 16 18:16:09.549868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.543545 2570 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
-- The same set of "unrecognized feature gate" warnings is emitted a third time beginning at 18:16:09.543640; the excerpt ends truncated partway through this third pass --
kubenswrapper[2570]: W0416 18:16:09.543862 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 16 18:16:09.551909 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.543864 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 16 18:16:09.552432 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.543867 2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 18:16:09.552432 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.543869 2570 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 18:16:09.552432 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.543874 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 16 18:16:09.552432 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.543876 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 16 18:16:09.552432 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.543879 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 16 18:16:09.552432 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.544522 2570 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 16 18:16:09.552432 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.551377 2570 server.go:530] "Kubelet version" kubeletVersion="v1.33.9" Apr 16 18:16:09.552432 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.551398 2570 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 18:16:09.552432 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551446 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 16 18:16:09.552432 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551452 2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
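
Note: the wall of feature_gate.go:328 warnings above and below is the kubelet rejecting OpenShift-only feature gates that upstream Kubernetes does not register; the list repeats because the gates are re-applied once per configuration pass in this boot (there are three feature_gate.go:384 "feature gates:" summaries). A minimal offline audit, assuming this journal has been saved to a file named kubelet.log (the filename and regex are illustrative, not from the log):

    import re
    from collections import Counter

    # Matches: ... feature_gate.go:328] unrecognized feature gate: <Name>
    pattern = re.compile(r"feature_gate\.go:328\] unrecognized feature gate: (\S+)")

    text = open("kubelet.log", encoding="utf-8").read()
    counts = Counter(pattern.findall(text))
    print(f"{len(counts)} distinct unrecognized gates")
    for name, n in counts.most_common():
        # n > 1 simply means the gate was re-reported on a later parse pass
        print(f"{n:3d}  {name}")

Only the gates listed in the feature_gate.go:384 map actually take effect; everything flagged at :328 is ignored by this kubelet.
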
Apr 16 18:16:09.552432 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551459 2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 18:16:09.552432 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551464 2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 18:16:09.552432 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551468 2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 18:16:09.552432 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551473 2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 18:16:09.552432 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551477 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 18:16:09.552432 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551480 2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551483 2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551485 2570 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551488 2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551491 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551494 2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551497 2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551499 2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551502 2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551505 2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551508 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551510 2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551513 2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551516 2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551518 2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551521 2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551523 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551526 2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551529 2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 18:16:09.552876 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551531 2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551534 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551537 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551540 2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551545 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551551 2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551554 2570 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551557 2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551560 2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551563 2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551565 2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551567 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551570 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551572 2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551575 2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551579 2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551583 2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551586 2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551588 2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 18:16:09.553441 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551591 2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551593 2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551595 2570 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551598 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551601 2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551604 2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551606 2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551609 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551611 2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551614 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551617 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551621 2570 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551625 2570 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551629 2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551632 2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551635 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551637 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551640 2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551643 2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551647 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 18:16:09.553906 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551650 2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551653 2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551656 2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551658 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551661 2570 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551664 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551666 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551669 2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551671 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551674 2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551676 2570 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551679 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551682 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551684 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551687 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551689 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551692 2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551696 2570 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551701 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551705 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 18:16:09.554403 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551708 2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 18:16:09.554888 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.551713 2570 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 16 18:16:09.554888 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551820 2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 18:16:09.554888 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551825 2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 18:16:09.554888 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551828 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 18:16:09.554888 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551831 2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 18:16:09.554888 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551834 2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 18:16:09.554888 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551837 2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 18:16:09.554888 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551840 2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 18:16:09.554888 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551843 2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 18:16:09.554888 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551848 2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 18:16:09.554888 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551852 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 18:16:09.554888 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551857 2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 18:16:09.554888 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551861 2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 18:16:09.554888 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551863 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 18:16:09.554888 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551866 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551868 2570 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551871 2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551873 2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551876 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551879 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551882 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551884 2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551887 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551890 2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551892 2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551894 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551897 2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551900 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551902 2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551904 2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551907 2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551910 2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551913 2570 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551916 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 18:16:09.555321 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551918 2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551921 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551925 2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551929 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551935 2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551939 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551942 2570 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551944 2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551947 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551950 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551953 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551956 2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551958 2570 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551961 2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551963 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551966 2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551968 2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551971 2570 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551973 2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551976 2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 18:16:09.555803 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551978 2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551981 2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551984 2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551986 2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551988 2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551991 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551993 2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551996 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.551998 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552002 2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552006 2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552010 2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552014 2570 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552017 2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552021 2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552025 2570 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552028 2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552031 2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552034 2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 18:16:09.556365 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552037 2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 18:16:09.556896 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552040 2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 18:16:09.556896 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552044 2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 18:16:09.556896 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552046 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 18:16:09.556896 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552050 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 18:16:09.556896 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552052 2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 18:16:09.556896 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552055 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 18:16:09.556896 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552057 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 18:16:09.556896 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552060 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 18:16:09.556896 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552062 2570 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 18:16:09.556896 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552065 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 18:16:09.556896 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552067 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 18:16:09.556896 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552070 2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 18:16:09.556896 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:09.552072 2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 18:16:09.556896 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.552079 2570 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 16 18:16:09.556896 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.552932 2570 server.go:962] "Client rotation is on, will bootstrap in background"
Apr 16 18:16:09.557294 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.554934 2570 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Apr 16 18:16:09.557294 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.555840 2570 server.go:1019] "Starting client certificate rotation"
Apr 16 18:16:09.557294 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.555948 2570 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 16 18:16:09.557294 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.556744 2570 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 16 18:16:09.582829 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.582806 2570 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 16 18:16:09.589098 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.589073 2570 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 16 18:16:09.601699 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.601668 2570 log.go:25] "Validated CRI v1 runtime API"
Apr 16 18:16:09.606800 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.606779 2570 log.go:25] "Validated CRI v1 image API"
Apr 16 18:16:09.608097 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.608083 2570 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 18:16:09.610609 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.610580 2570 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/nvme0n1p2 ec785929-b351-407c-90cd-9d6000c0dfd2:/dev/nvme0n1p3 faed3412-93f1-4db4-acbb-74aa4d797261:/dev/nvme0n1p4]
Apr 16 18:16:09.610609 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.610603 2570 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Apr 16 18:16:09.612584 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.612562 2570 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
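
Note: the fs.go:135 and fs.go:136 entries above dump cAdvisor's filesystem view as Go map/struct literals, which are hard to scan by eye. A small parser sketch (again assuming the journal is saved as kubelet.log; the regex is mine, not part of the log format contract):

    import re

    text = open("kubelet.log", encoding="utf-8").read()

    # Matches entries like:
    #   /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0}
    part = re.compile(
        r"(?P<dev>[\w/.-]+):\{mountpoint:(?P<mnt>\S+) major:(?P<maj>\d+) "
        r"minor:(?P<min>\d+) fsType:(?P<fs>\w+) blockSize:\d+\}"
    )
    for m in part.finditer(text):
        print(f"{m['dev']:<16} {m['fs']:<8} {m['maj']}:{m['min']:<4} mounted at {m['mnt']}")

Among other things this recovers that /var, the kubelet's working filesystem, is xfs on /dev/nvme0n1p4.
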
Apr 16 18:16:09.616660 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.616546 2570 manager.go:217] Machine: {Timestamp:2026-04-16 18:16:09.614611037 +0000 UTC m=+0.420363829 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3104009 MemoryCapacity:33164492800 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2312c3b8b43410a5c7fcf2f8c204c1 SystemUUID:ec2312c3-b8b4-3410-a5c7-fcf2f8c204c1 BootID:b8e7af62-453a-42a3-8f48-2a2ea5d2a899 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582246400 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582246400 Type:vfs Inodes:4048400 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632898560 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6098944 Type:vfs Inodes:18446744073709551615 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:e0:38:5c:48:59 Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:e0:38:5c:48:59 Speed:0 Mtu:9001} {Name:ovs-system MacAddress:02:0b:63:7a:7d:a2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164492800 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Apr 16 18:16:09.616660 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.616653 2570 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
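
Note: the Machine entry reports MemoryCapacity:33164492800 (about 30.9 GiB). Combined with the reservations in the nodeConfig entry just below (SystemReserved memory 1Gi, KubeReserved null, hard eviction memory.available 100Mi), the standard node-allocatable arithmetic (allocatable = capacity - kube-reserved - system-reserved - hard eviction) can be checked by hand; a back-of-the-envelope sketch with the logged values hard-coded:

    GI = 1024**3
    MI = 1024**2

    capacity = 33_164_492_800   # MemoryCapacity, bytes (Machine entry above)
    system_reserved = 1 * GI    # "SystemReserved":{"memory":"1Gi"} (nodeConfig below)
    kube_reserved = 0           # "KubeReserved":null
    hard_eviction = 100 * MI    # memory.available hard threshold, 100Mi

    allocatable = capacity - system_reserved - kube_reserved - hard_eviction
    print(f"capacity    {capacity / GI:6.2f} Gi")
    print(f"allocatable {allocatable / GI:6.2f} Gi")  # ~29.79 Gi with these inputs

This is roughly what the node's .status.allocatable memory should come out to once it registers, assuming no reservations beyond those shown in this log.
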
Apr 16 18:16:09.616769 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.616733 2570 manager.go:233] Version: {KernelVersion:5.14.0-570.104.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260401-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Apr 16 18:16:09.619333 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.619312 2570 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 18:16:09.619486 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.619336 2570 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-141-219.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 18:16:09.619532 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.619496 2570 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 18:16:09.619532 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.619505 2570 container_manager_linux.go:306] "Creating device plugin manager"
Apr 16 18:16:09.619532 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.619517 2570 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 16 18:16:09.620281 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.620271 2570 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 16 18:16:09.621244 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.621223 2570 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 18:16:09.621351 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.621342 2570 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Apr 16 18:16:09.623822 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.623813 2570 kubelet.go:491] "Attempting to sync node with API server"
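
Note: the HardEvictionThresholds in the nodeConfig above are mostly percentages. Translated against the filesystems this node actually reported (/var on /dev/nvme0n1p4: 128243970048 bytes, 62651840 inodes), they become concrete trigger points. A sketch, with the caveat that mapping nodefs and imagefs onto /var assumes the usual CRI-O layout under /var, which the log does not state explicitly:

    var_capacity = 128_243_970_048   # /dev/nvme0n1p4 (/var), bytes
    var_inodes = 62_651_840          # inodes on /dev/nvme0n1p4

    thresholds = {
        "nodefs.available  < 10%":   var_capacity * 0.10,
        "nodefs.inodesFree < 5%":    var_inodes * 0.05,
        "imagefs.available < 15%":   var_capacity * 0.15,
        "memory.available  < 100Mi": 100 * 1024**2,
    }
    for name, value in thresholds.items():
        unit = "inodes" if "inodes" in name else "bytes"
        print(f"{name:<26} -> {value:,.0f} {unit}")
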
path" path="/etc/kubernetes/manifests" Apr 16 18:16:09.623861 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.623837 2570 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Apr 16 18:16:09.623861 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.623847 2570 kubelet.go:397] "Adding apiserver pod source" Apr 16 18:16:09.623861 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.623855 2570 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 18:16:09.625373 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.625361 2570 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 16 18:16:09.625420 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.625382 2570 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 16 18:16:09.629624 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.629603 2570 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 16 18:16:09.631643 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.631626 2570 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 18:16:09.633044 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.633030 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 16 18:16:09.633093 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.633049 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 16 18:16:09.633093 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.633055 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 16 18:16:09.633093 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.633061 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 16 18:16:09.633093 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.633067 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 16 18:16:09.633093 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.633073 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Apr 16 18:16:09.633093 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.633079 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Apr 16 18:16:09.633093 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.633084 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 16 18:16:09.633093 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.633091 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 16 18:16:09.633093 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.633098 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 16 18:16:09.633369 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.633106 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 16 18:16:09.633369 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.633115 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 16 18:16:09.633911 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.633901 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 16 18:16:09.633948 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.633912 2570 plugins.go:616] "Loaded volume plugin" 
pluginName="kubernetes.io/image" Apr 16 18:16:09.635974 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.635953 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 18:16:09.636149 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.636126 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-141-219.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 18:16:09.636195 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.636156 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-141-219.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 16 18:16:09.637662 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.637648 2570 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 18:16:09.637747 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.637683 2570 server.go:1295] "Started kubelet" Apr 16 18:16:09.637798 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.637780 2570 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 18:16:09.637838 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.637772 2570 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 18:16:09.637838 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.637835 2570 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 16 18:16:09.638376 ip-10-0-141-219 systemd[1]: Started Kubernetes Kubelet. 
Apr 16 18:16:09.638977 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.638949 2570 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 18:16:09.641244 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.641216 2570 server.go:317] "Adding debug handlers to kubelet server"
Apr 16 18:16:09.645737 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.645720 2570 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 18:16:09.645840 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.645737 2570 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Apr 16 18:16:09.646356 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.646341 2570 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 16 18:16:09.646356 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.646343 2570 volume_manager.go:295] "The desired_state_of_world populator starts"
Apr 16 18:16:09.646462 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.646363 2570 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 16 18:16:09.646492 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.646471 2570 reconstruct.go:97] "Volume reconstruction finished"
Apr 16 18:16:09.646492 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.646481 2570 reconciler.go:26] "Reconciler: start to sync state"
Apr 16 18:16:09.646643 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.646624 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-219.ec2.internal\" not found"
Apr 16 18:16:09.646737 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.646718 2570 factory.go:55] Registering systemd factory
Apr 16 18:16:09.646810 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.646775 2570 factory.go:223] Registration of the systemd container factory successfully
Apr 16 18:16:09.647002 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.646989 2570 factory.go:153] Registering CRI-O factory
Apr 16 18:16:09.647050 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.647006 2570 factory.go:223] Registration of the crio container factory successfully
Apr 16 18:16:09.647130 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.647104 2570 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Apr 16 18:16:09.647171 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.647141 2570 factory.go:103] Registering Raw factory
Apr 16 18:16:09.647171 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.647156 2570 manager.go:1196] Started watching for new ooms in manager
Apr 16 18:16:09.647434 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.647411 2570 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Apr 16 18:16:09.647864 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.647844 2570 manager.go:319] Starting recovery of all containers
Apr 16 18:16:09.650995 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.650969 2570 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-141-219.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Apr 16 18:16:09.651091 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.651054 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 18:16:09.651917 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.651023 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-141-219.ec2.internal.18a6e912b744c5fa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-141-219.ec2.internal,UID:ip-10-0-141-219.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-141-219.ec2.internal,},FirstTimestamp:2026-04-16 18:16:09.637660154 +0000 UTC m=+0.443412947,LastTimestamp:2026-04-16 18:16:09.637660154 +0000 UTC m=+0.443412947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-141-219.ec2.internal,}"
Apr 16 18:16:09.659328 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.659309 2570 manager.go:324] Recovery completed
Apr 16 18:16:09.663400 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.663380 2570 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-b2n7r"
Apr 16 18:16:09.664323 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.664311 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 18:16:09.666381 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.666362 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 18:16:09.666468 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.666397 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 18:16:09.666468 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.666412 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasSufficientPID"
Apr 16 18:16:09.666922 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.666910 2570 cpu_manager.go:222] "Starting CPU manager" policy="none"
Apr 16 18:16:09.666976 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.666922 2570 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Apr 16 18:16:09.666976 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.666940 2570 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 18:16:09.668388 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.668328 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-141-219.ec2.internal.18a6e912b8fb040a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-141-219.ec2.internal,UID:ip-10-0-141-219.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-141-219.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-141-219.ec2.internal,},FirstTimestamp:2026-04-16 18:16:09.66638081 +0000 UTC m=+0.472133603,LastTimestamp:2026-04-16 18:16:09.66638081 +0000 UTC m=+0.472133603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-141-219.ec2.internal,}"
Apr 16 18:16:09.669338 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.669324 2570 policy_none.go:49] "None policy: Start"
Apr 16 18:16:09.669403 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.669344 2570 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 16 18:16:09.669403 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.669358 2570 state_mem.go:35] "Initializing new in-memory state store"
Apr 16 18:16:09.670569 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.670551 2570 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-b2n7r"
Apr 16 18:16:09.712107 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.706753 2570 manager.go:341] "Starting Device Plugin manager"
Apr 16 18:16:09.712107 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.706897 2570 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 18:16:09.712107 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.706908 2570 server.go:85] "Starting device plugin registration server"
Apr 16 18:16:09.712107 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.707098 2570 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 18:16:09.712107 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.707108 2570 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 18:16:09.712107 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.707218 2570 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Apr 16 18:16:09.712107 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.707328 2570 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Apr 16 18:16:09.712107 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.707339 2570 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 18:16:09.712107 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.707817 2570 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Apr 16 18:16:09.712107 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.707862 2570 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-141-219.ec2.internal\" not found"
Apr 16 18:16:09.775622 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.775578 2570 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 16 18:16:09.776747 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.776723 2570 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 16 18:16:09.776747 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.776746 2570 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 16 18:16:09.776906 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.776763 2570 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 18:16:09.776906 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.776770 2570 kubelet.go:2451] "Starting kubelet main sync loop"
Apr 16 18:16:09.776906 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.776798 2570 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Apr 16 18:16:09.779185 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.779163 2570 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 18:16:09.807648 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.807598 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 18:16:09.808426 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.808412 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 18:16:09.808499 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.808442 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 18:16:09.808499 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.808464 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasSufficientPID"
Apr 16 18:16:09.808499 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.808493 2570 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-141-219.ec2.internal"
Apr 16 18:16:09.817006 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.816993 2570 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-141-219.ec2.internal"
Apr 16 18:16:09.817056 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.817013 2570 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-141-219.ec2.internal\": node \"ip-10-0-141-219.ec2.internal\" not found"
Apr 16 18:16:09.834817 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.834795 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-219.ec2.internal\" not found"
Apr 16 18:16:09.877138 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.877113 2570 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-141-219.ec2.internal"]
Apr 16 18:16:09.877217 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.877187 2570
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 18:16:09.878774 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.878758 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasSufficientMemory" Apr 16 18:16:09.878861 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.878792 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 18:16:09.878861 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.878805 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasSufficientPID" Apr 16 18:16:09.880116 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.880100 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 18:16:09.880259 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.880243 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal" Apr 16 18:16:09.880308 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.880274 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 18:16:09.880736 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.880720 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasSufficientMemory" Apr 16 18:16:09.880809 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.880752 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 18:16:09.880809 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.880767 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasSufficientPID" Apr 16 18:16:09.880912 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.880841 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasSufficientMemory" Apr 16 18:16:09.880912 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.880870 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 18:16:09.880912 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.880884 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasSufficientPID" Apr 16 18:16:09.882490 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.882475 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-141-219.ec2.internal" Apr 16 18:16:09.882568 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.882504 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 18:16:09.883158 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.883143 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasSufficientMemory" Apr 16 18:16:09.883255 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.883171 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 18:16:09.883255 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.883184 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeHasSufficientPID" Apr 16 18:16:09.901786 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.901761 2570 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-141-219.ec2.internal\" not found" node="ip-10-0-141-219.ec2.internal" Apr 16 18:16:09.906053 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.906037 2570 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-141-219.ec2.internal\" not found" node="ip-10-0-141-219.ec2.internal" Apr 16 18:16:09.935879 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:09.935862 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-219.ec2.internal\" not found" Apr 16 18:16:09.947980 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.947963 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/4f9f135ae9937d0daa3eb597d8fe2521-config\") pod \"kube-apiserver-proxy-ip-10-0-141-219.ec2.internal\" (UID: \"4f9f135ae9937d0daa3eb597d8fe2521\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-141-219.ec2.internal" Apr 16 18:16:09.948051 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.947988 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/7f66aba8a476534f5fbd8eab5bc63db2-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal\" (UID: \"7f66aba8a476534f5fbd8eab5bc63db2\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal" Apr 16 18:16:09.948051 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:09.948006 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7f66aba8a476534f5fbd8eab5bc63db2-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal\" (UID: \"7f66aba8a476534f5fbd8eab5bc63db2\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal" Apr 16 18:16:10.036837 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:10.036810 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-219.ec2.internal\" not found" Apr 16 18:16:10.048197 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.048175 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/7f66aba8a476534f5fbd8eab5bc63db2-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal\" (UID: \"7f66aba8a476534f5fbd8eab5bc63db2\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal" Apr 16 18:16:10.048330 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.048208 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/4f9f135ae9937d0daa3eb597d8fe2521-config\") pod \"kube-apiserver-proxy-ip-10-0-141-219.ec2.internal\" (UID: \"4f9f135ae9937d0daa3eb597d8fe2521\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-141-219.ec2.internal" Apr 16 18:16:10.048330 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.048254 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/7f66aba8a476534f5fbd8eab5bc63db2-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal\" (UID: \"7f66aba8a476534f5fbd8eab5bc63db2\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal" Apr 16 18:16:10.048330 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.048270 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7f66aba8a476534f5fbd8eab5bc63db2-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal\" (UID: \"7f66aba8a476534f5fbd8eab5bc63db2\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal" Apr 16 18:16:10.048330 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.048279 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/4f9f135ae9937d0daa3eb597d8fe2521-config\") pod \"kube-apiserver-proxy-ip-10-0-141-219.ec2.internal\" (UID: \"4f9f135ae9937d0daa3eb597d8fe2521\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-141-219.ec2.internal" Apr 16 18:16:10.048330 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.048295 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/7f66aba8a476534f5fbd8eab5bc63db2-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal\" (UID: \"7f66aba8a476534f5fbd8eab5bc63db2\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal" Apr 16 18:16:10.137612 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:10.137537 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-219.ec2.internal\" not found" Apr 16 18:16:10.206052 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.206017 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal" Apr 16 18:16:10.208638 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.208623 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-141-219.ec2.internal" Apr 16 18:16:10.238671 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:10.238642 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-219.ec2.internal\" not found" Apr 16 18:16:10.339116 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:10.339084 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-219.ec2.internal\" not found" Apr 16 18:16:10.439677 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:10.439613 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-219.ec2.internal\" not found" Apr 16 18:16:10.540243 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:10.540205 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-219.ec2.internal\" not found" Apr 16 18:16:10.556676 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.556655 2570 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 16 18:16:10.556859 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.556833 2570 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 16 18:16:10.640918 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:10.640900 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-219.ec2.internal\" not found" Apr 16 18:16:10.646786 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.646763 2570 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Apr 16 18:16:10.647486 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:10.647455 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f9f135ae9937d0daa3eb597d8fe2521.slice/crio-4d1f5a0926a100ea53f823c257c4fae3ce7157e9f067eb1b1385689d18e3cc17 WatchSource:0}: Error finding container 4d1f5a0926a100ea53f823c257c4fae3ce7157e9f067eb1b1385689d18e3cc17: Status 404 returned error can't find the container with id 4d1f5a0926a100ea53f823c257c4fae3ce7157e9f067eb1b1385689d18e3cc17 Apr 16 18:16:10.647947 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:10.647921 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f66aba8a476534f5fbd8eab5bc63db2.slice/crio-5a8d220aa1562d7e0eed4adde54adb48648d6c28bee06cc2fd7c7035959b341d WatchSource:0}: Error finding container 5a8d220aa1562d7e0eed4adde54adb48648d6c28bee06cc2fd7c7035959b341d: Status 404 returned error can't find the container with id 5a8d220aa1562d7e0eed4adde54adb48648d6c28bee06cc2fd7c7035959b341d Apr 16 18:16:10.652511 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.652497 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 18:16:10.660076 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.660055 2570 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 16 18:16:10.671917 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.671874 2570 
certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-15 18:11:09 +0000 UTC" deadline="2028-01-16 04:03:20.322526672 +0000 UTC" Apr 16 18:16:10.671917 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.671912 2570 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="15345h47m9.650618222s" Apr 16 18:16:10.721910 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.721852 2570 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-57kpj" Apr 16 18:16:10.730875 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.730860 2570 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-57kpj" Apr 16 18:16:10.739976 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.739954 2570 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 18:16:10.741781 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:10.741767 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-219.ec2.internal\" not found" Apr 16 18:16:10.780099 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.780058 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal" event={"ID":"7f66aba8a476534f5fbd8eab5bc63db2","Type":"ContainerStarted","Data":"5a8d220aa1562d7e0eed4adde54adb48648d6c28bee06cc2fd7c7035959b341d"} Apr 16 18:16:10.780985 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:10.780962 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-141-219.ec2.internal" event={"ID":"4f9f135ae9937d0daa3eb597d8fe2521","Type":"ContainerStarted","Data":"4d1f5a0926a100ea53f823c257c4fae3ce7157e9f067eb1b1385689d18e3cc17"} Apr 16 18:16:10.842175 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:10.842148 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-219.ec2.internal\" not found" Apr 16 18:16:10.942702 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:10.942675 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-141-219.ec2.internal\" not found" Apr 16 18:16:11.031908 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.031696 2570 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 18:16:11.046882 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.046861 2570 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-141-219.ec2.internal" Apr 16 18:16:11.058887 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.058859 2570 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 16 18:16:11.060752 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.060731 2570 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal" Apr 16 18:16:11.072816 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.072794 2570 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must not contain dots]" Apr 16 18:16:11.142624 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.142570 2570 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 18:16:11.625346 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.625261 2570 apiserver.go:52] "Watching apiserver" Apr 16 18:16:11.632068 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.632044 2570 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Apr 16 18:16:11.633943 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.633917 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-5jc6x","kube-system/konnectivity-agent-jkbvg","kube-system/kube-apiserver-proxy-ip-10-0-141-219.ec2.internal","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6","openshift-cluster-node-tuning-operator/tuned-5z2wk","openshift-multus/multus-65f6j","openshift-multus/multus-additional-cni-plugins-lxx4t","openshift-ovn-kubernetes/ovnkube-node-hls95","openshift-image-registry/node-ca-d4t7h","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal","openshift-multus/network-metrics-daemon-hgcdt","openshift-network-diagnostics/network-check-target-fzg6h"] Apr 16 18:16:11.636783 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.636737 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jc6x" Apr 16 18:16:11.638257 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.637924 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.638257 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.638048 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.639112 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.639092 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 16 18:16:11.639196 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.639154 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-hnjqb\"" Apr 16 18:16:11.639196 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.639184 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 16 18:16:11.639318 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.639199 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 16 18:16:11.639578 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.639558 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.639994 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.639957 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-qw4f9\"" Apr 16 18:16:11.640102 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.639989 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 16 18:16:11.640198 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.640162 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 16 18:16:11.640314 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.640301 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 16 18:16:11.640314 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.640309 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-qfbmz\"" Apr 16 18:16:11.640419 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.640328 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 16 18:16:11.640468 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.640422 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 16 18:16:11.641015 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.640971 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:11.641120 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:11.641075 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635" Apr 16 18:16:11.641865 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.641678 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 16 18:16:11.642271 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.642001 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 16 18:16:11.642271 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.642012 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 16 18:16:11.642271 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.642141 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 16 18:16:11.642768 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.642744 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-hvk9x\"" Apr 16 18:16:11.642854 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.642774 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 16 18:16:11.642854 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.642796 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 16 18:16:11.642854 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.642746 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 16 18:16:11.643877 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.643860 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:11.643990 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:11.643970 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3" Apr 16 18:16:11.645217 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.645197 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-jkbvg" Apr 16 18:16:11.647331 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.647272 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\"" Apr 16 18:16:11.647415 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.647393 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" Apr 16 18:16:11.647415 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.647409 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-wzzrc\"" Apr 16 18:16:11.650898 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.649512 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.651780 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.651543 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\"" Apr 16 18:16:11.651780 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.651550 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\"" Apr 16 18:16:11.651903 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.651783 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 16 18:16:11.652298 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.652276 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-zdnjn\"" Apr 16 18:16:11.652298 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.652291 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.652446 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.652296 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-d4t7h" Apr 16 18:16:11.654561 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.654543 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 16 18:16:11.654820 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.654544 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 16 18:16:11.654820 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.654543 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 16 18:16:11.654820 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.654797 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-w7mkd\"" Apr 16 18:16:11.655042 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.654571 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-kgn4n\"" Apr 16 18:16:11.655042 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.654801 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 16 18:16:11.655042 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.654562 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 16 18:16:11.655042 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655008 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-run-systemd\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.655042 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655037 2570 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/259c30de-27f1-414c-b384-b90b6e241cd8-os-release\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.655319 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655062 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/259c30de-27f1-414c-b384-b90b6e241cd8-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.655319 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655086 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-sys-fs\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.655319 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655108 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-kubernetes\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.655319 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655131 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-cnibin\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.655319 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655153 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-run-netns\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.655319 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655206 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-log-socket\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.655319 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655259 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/259c30de-27f1-414c-b384-b90b6e241cd8-cni-binary-copy\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.655319 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655289 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: 
\"kubernetes.io/secret/b0897e32-576e-42ee-a9c4-bf56f480aba0-agent-certs\") pod \"konnectivity-agent-jkbvg\" (UID: \"b0897e32-576e-42ee-a9c4-bf56f480aba0\") " pod="kube-system/konnectivity-agent-jkbvg" Apr 16 18:16:11.655748 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655327 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-sysconfig\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.655748 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655372 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb8gp\" (UniqueName: \"kubernetes.io/projected/f8eeffdd-37a1-4898-94ea-20c490313c34-kube-api-access-lb8gp\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.655748 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655401 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b0897e32-576e-42ee-a9c4-bf56f480aba0-konnectivity-ca\") pod \"konnectivity-agent-jkbvg\" (UID: \"b0897e32-576e-42ee-a9c4-bf56f480aba0\") " pod="kube-system/konnectivity-agent-jkbvg" Apr 16 18:16:11.655748 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655449 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-etc-selinux\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.655748 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655474 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-sysctl-conf\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.655748 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655496 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-sys\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.655748 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655519 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1b14fadb-4a71-439d-84de-91c5c3e29811-host-slash\") pod \"iptables-alerter-5jc6x\" (UID: \"1b14fadb-4a71-439d-84de-91c5c3e29811\") " pod="openshift-network-operator/iptables-alerter-5jc6x" Apr 16 18:16:11.655748 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655567 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-systemd-units\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.655748 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655595 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-systemd\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.655748 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655617 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-tuned\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.655748 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655643 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-var-lib-cni-bin\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.655748 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655667 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9eda7e8d-1d99-41d3-acfb-b6c80829811c-ovn-node-metrics-cert\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.655748 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655719 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96lzw\" (UniqueName: \"kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw\") pod \"network-check-target-fzg6h\" (UID: \"6d72f360-ffda-4447-8b43-c1059ff81bf3\") " pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:11.655748 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655743 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfffv\" (UniqueName: \"kubernetes.io/projected/1abad187-e043-49a6-9671-1535ec064d28-kube-api-access-mfffv\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.656307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655765 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-lib-modules\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.656307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655788 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-run-k8s-cni-cncf-io\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.656307 ip-10-0-141-219 kubenswrapper[2570]: I0416 
18:16:11.655841 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-var-lib-kubelet\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.656307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655876 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-run-multus-certs\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.656307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655914 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-node-log\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.656307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655940 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/259c30de-27f1-414c-b384-b90b6e241cd8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.656307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655967 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-multus-cni-dir\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.656307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.655994 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-run-netns\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.656307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656011 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-run-ovn\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.656307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656026 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-run-ovn-kubernetes\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.656307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656050 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.656307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656075 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs\") pod \"network-metrics-daemon-hgcdt\" (UID: \"5af0e6ec-389a-47dd-afc0-725b505e4635\") " pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:11.656307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656091 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-socket-dir\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.656307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656113 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-os-release\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.656307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656136 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-var-lib-cni-multus\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.656307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656155 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-etc-openvswitch\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656171 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9eda7e8d-1d99-41d3-acfb-b6c80829811c-ovnkube-config\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656194 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/1b14fadb-4a71-439d-84de-91c5c3e29811-iptables-alerter-script\") pod \"iptables-alerter-5jc6x\" (UID: \"1b14fadb-4a71-439d-84de-91c5c3e29811\") " pod="openshift-network-operator/iptables-alerter-5jc6x" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656219 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-registration-dir\") pod \"aws-ebs-csi-driver-node-8bzs6\" 
(UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656266 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-run\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656306 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9eda7e8d-1d99-41d3-acfb-b6c80829811c-env-overrides\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656330 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-system-cni-dir\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656353 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f8eeffdd-37a1-4898-94ea-20c490313c34-cni-binary-copy\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656375 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-hostroot\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656395 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-multus-conf-dir\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656419 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-kubelet-dir\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656452 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-kubelet\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656475 2570 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-run-openvswitch\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656498 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldln7\" (UniqueName: \"kubernetes.io/projected/5af0e6ec-389a-47dd-afc0-725b505e4635-kube-api-access-ldln7\") pod \"network-metrics-daemon-hgcdt\" (UID: \"5af0e6ec-389a-47dd-afc0-725b505e4635\") " pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656532 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29f69f9c-834d-4ff7-92ac-005e00d0651c-tmp\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656575 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f8eeffdd-37a1-4898-94ea-20c490313c34-multus-daemon-config\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656608 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-etc-kubernetes\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.657038 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656634 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-slash\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.657674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656661 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9eda7e8d-1d99-41d3-acfb-b6c80829811c-ovnkube-script-lib\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.657674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656686 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/259c30de-27f1-414c-b384-b90b6e241cd8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.657674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656728 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4wd4\" (UniqueName: 
\"kubernetes.io/projected/259c30de-27f1-414c-b384-b90b6e241cd8-kube-api-access-d4wd4\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.657674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656752 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-host\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.657674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656771 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-multus-socket-dir-parent\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.657674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656795 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/259c30de-27f1-414c-b384-b90b6e241cd8-system-cni-dir\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.657674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656817 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/259c30de-27f1-414c-b384-b90b6e241cd8-cnibin\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.657674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656841 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjnsj\" (UniqueName: \"kubernetes.io/projected/1b14fadb-4a71-439d-84de-91c5c3e29811-kube-api-access-fjnsj\") pod \"iptables-alerter-5jc6x\" (UID: \"1b14fadb-4a71-439d-84de-91c5c3e29811\") " pod="openshift-network-operator/iptables-alerter-5jc6x" Apr 16 18:16:11.657674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656862 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-device-dir\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.657674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656884 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-modprobe-d\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.657674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656914 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-sysctl-d\") pod 
\"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.657674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656945 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-var-lib-kubelet\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.657674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656972 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9c54\" (UniqueName: \"kubernetes.io/projected/29f69f9c-834d-4ff7-92ac-005e00d0651c-kube-api-access-x9c54\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.657674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.656995 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-var-lib-openvswitch\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.657674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.657017 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-cni-bin\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.657674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.657040 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-cni-netd\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.658247 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.657062 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8kzb\" (UniqueName: \"kubernetes.io/projected/9eda7e8d-1d99-41d3-acfb-b6c80829811c-kube-api-access-x8kzb\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.732383 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.732346 2570 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-15 18:11:10 +0000 UTC" deadline="2027-12-07 14:59:19.259475841 +0000 UTC" Apr 16 18:16:11.732383 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.732377 2570 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="14396h43m7.527102315s" Apr 16 18:16:11.747413 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.747386 2570 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 18:16:11.758041 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758002 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"sys\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-sys\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.758185 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758058 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1b14fadb-4a71-439d-84de-91c5c3e29811-host-slash\") pod \"iptables-alerter-5jc6x\" (UID: \"1b14fadb-4a71-439d-84de-91c5c3e29811\") " pod="openshift-network-operator/iptables-alerter-5jc6x" Apr 16 18:16:11.758185 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758085 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-systemd-units\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.758185 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758111 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d-host\") pod \"node-ca-d4t7h\" (UID: \"fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d\") " pod="openshift-image-registry/node-ca-d4t7h" Apr 16 18:16:11.758185 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758124 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-sys\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.758185 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758134 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-systemd\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.758185 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758157 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-tuned\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.758185 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758176 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-systemd-units\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.758185 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758182 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-var-lib-cni-bin\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.758548 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758221 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/1b14fadb-4a71-439d-84de-91c5c3e29811-host-slash\") pod \"iptables-alerter-5jc6x\" (UID: \"1b14fadb-4a71-439d-84de-91c5c3e29811\") " pod="openshift-network-operator/iptables-alerter-5jc6x" Apr 16 18:16:11.758548 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758221 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9eda7e8d-1d99-41d3-acfb-b6c80829811c-ovn-node-metrics-cert\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.758548 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758269 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-96lzw\" (UniqueName: \"kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw\") pod \"network-check-target-fzg6h\" (UID: \"6d72f360-ffda-4447-8b43-c1059ff81bf3\") " pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:11.758548 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758330 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mfffv\" (UniqueName: \"kubernetes.io/projected/1abad187-e043-49a6-9671-1535ec064d28-kube-api-access-mfffv\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.758548 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758354 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-lib-modules\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.758548 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758383 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-run-k8s-cni-cncf-io\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.758548 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758389 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-systemd\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.758548 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758409 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-var-lib-kubelet\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.758548 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758425 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-var-lib-cni-bin\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.758548 ip-10-0-141-219 kubenswrapper[2570]: I0416 
18:16:11.758469 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-var-lib-kubelet\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.758548 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758435 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-run-multus-certs\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.758548 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758485 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-run-multus-certs\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.758548 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758504 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-node-log\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.758548 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758539 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-lib-modules\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.758548 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758532 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/259c30de-27f1-414c-b384-b90b6e241cd8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758542 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-run-k8s-cni-cncf-io\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758579 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-node-log\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758590 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-multus-cni-dir\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 
18:16:11.758615 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-run-netns\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758641 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-run-ovn\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758669 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-run-ovn-kubernetes\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758688 2570 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758697 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-run-netns\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758729 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758745 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-multus-cni-dir\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758744 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-run-ovn\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758694 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758772 2570 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-run-ovn-kubernetes\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758802 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs\") pod \"network-metrics-daemon-hgcdt\" (UID: \"5af0e6ec-389a-47dd-afc0-725b505e4635\") " pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758831 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-socket-dir\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758856 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-os-release\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758882 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-var-lib-cni-multus\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.759216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758911 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-etc-openvswitch\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.759943 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758935 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9eda7e8d-1d99-41d3-acfb-b6c80829811c-ovnkube-config\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.759943 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:11.758961 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:16:11.759943 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.758961 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/1b14fadb-4a71-439d-84de-91c5c3e29811-iptables-alerter-script\") pod \"iptables-alerter-5jc6x\" (UID: \"1b14fadb-4a71-439d-84de-91c5c3e29811\") " pod="openshift-network-operator/iptables-alerter-5jc6x" Apr 16 18:16:11.759943 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.759004 2570 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-registration-dir\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.759943 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.759030 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-run\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.759943 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:11.759057 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs podName:5af0e6ec-389a-47dd-afc0-725b505e4635 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:12.259026942 +0000 UTC m=+3.064779728 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs") pod "network-metrics-daemon-hgcdt" (UID: "5af0e6ec-389a-47dd-afc0-725b505e4635") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:16:11.759943 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.759056 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-host-var-lib-cni-multus\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.759943 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.759098 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-run\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.759943 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.759097 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9eda7e8d-1d99-41d3-acfb-b6c80829811c-env-overrides\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.759943 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.759142 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-system-cni-dir\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.759943 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.759149 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-etc-openvswitch\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.759943 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.759161 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-socket-dir\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.759943 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.759136 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/259c30de-27f1-414c-b384-b90b6e241cd8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.759943 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.759169 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f8eeffdd-37a1-4898-94ea-20c490313c34-cni-binary-copy\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.759943 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.759814 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-hostroot\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.759943 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.759882 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-multus-conf-dir\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.760488 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.759976 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-kubelet-dir\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.760488 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.760100 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-multus-conf-dir\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.760488 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.760151 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-kubelet\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.760488 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.760192 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-run-openvswitch\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.760488 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.759249 2570 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-registration-dir\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.760488 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.760278 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ldln7\" (UniqueName: \"kubernetes.io/projected/5af0e6ec-389a-47dd-afc0-725b505e4635-kube-api-access-ldln7\") pod \"network-metrics-daemon-hgcdt\" (UID: \"5af0e6ec-389a-47dd-afc0-725b505e4635\") " pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:11.760488 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.760190 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9eda7e8d-1d99-41d3-acfb-b6c80829811c-ovnkube-config\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.760488 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.760436 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-kubelet-dir\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.760488 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.760478 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-kubelet\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.761173 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.761149 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/1b14fadb-4a71-439d-84de-91c5c3e29811-iptables-alerter-script\") pod \"iptables-alerter-5jc6x\" (UID: \"1b14fadb-4a71-439d-84de-91c5c3e29811\") " pod="openshift-network-operator/iptables-alerter-5jc6x" Apr 16 18:16:11.761173 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.761143 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86k95\" (UniqueName: \"kubernetes.io/projected/fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d-kube-api-access-86k95\") pod \"node-ca-d4t7h\" (UID: \"fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d\") " pod="openshift-image-registry/node-ca-d4t7h" Apr 16 18:16:11.761423 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.761385 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-run-openvswitch\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.761498 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.761378 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f8eeffdd-37a1-4898-94ea-20c490313c34-cni-binary-copy\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 
18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.761684 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-system-cni-dir\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.762192 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9eda7e8d-1d99-41d3-acfb-b6c80829811c-env-overrides\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.762335 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-os-release\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.762367 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-tuned\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.762480 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29f69f9c-834d-4ff7-92ac-005e00d0651c-tmp\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.762498 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-hostroot\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.762519 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f8eeffdd-37a1-4898-94ea-20c490313c34-multus-daemon-config\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.762559 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-etc-kubernetes\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.762670 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-slash\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.762704 2570 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9eda7e8d-1d99-41d3-acfb-b6c80829811c-ovnkube-script-lib\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.762735 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/259c30de-27f1-414c-b384-b90b6e241cd8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.762804 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d4wd4\" (UniqueName: \"kubernetes.io/projected/259c30de-27f1-414c-b384-b90b6e241cd8-kube-api-access-d4wd4\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.762872 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-host\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.762916 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-multus-socket-dir-parent\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.762947 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/259c30de-27f1-414c-b384-b90b6e241cd8-system-cni-dir\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.762977 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/259c30de-27f1-414c-b384-b90b6e241cd8-cnibin\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.763431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763039 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fjnsj\" (UniqueName: \"kubernetes.io/projected/1b14fadb-4a71-439d-84de-91c5c3e29811-kube-api-access-fjnsj\") pod \"iptables-alerter-5jc6x\" (UID: \"1b14fadb-4a71-439d-84de-91c5c3e29811\") " pod="openshift-network-operator/iptables-alerter-5jc6x" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763081 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-device-dir\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: 
\"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763124 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-modprobe-d\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763154 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-sysctl-d\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763183 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-var-lib-kubelet\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763207 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x9c54\" (UniqueName: \"kubernetes.io/projected/29f69f9c-834d-4ff7-92ac-005e00d0651c-kube-api-access-x9c54\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763276 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-var-lib-openvswitch\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763307 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-cni-bin\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763362 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-cni-netd\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763415 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x8kzb\" (UniqueName: \"kubernetes.io/projected/9eda7e8d-1d99-41d3-acfb-b6c80829811c-kube-api-access-x8kzb\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763440 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-run-systemd\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763468 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/259c30de-27f1-414c-b384-b90b6e241cd8-os-release\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763551 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9eda7e8d-1d99-41d3-acfb-b6c80829811c-ovnkube-script-lib\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763589 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-slash\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763681 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-etc-kubernetes\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763733 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/259c30de-27f1-414c-b384-b90b6e241cd8-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763820 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-sys-fs\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.764321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763839 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/259c30de-27f1-414c-b384-b90b6e241cd8-cnibin\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763864 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d-serviceca\") pod \"node-ca-d4t7h\" (UID: \"fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d\") " pod="openshift-image-registry/node-ca-d4t7h" Apr 16 18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 
18:16:11.763900 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-kubernetes\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764008 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-cnibin\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764057 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/259c30de-27f1-414c-b384-b90b6e241cd8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764081 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-kubernetes\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.763946 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-cnibin\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764142 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/259c30de-27f1-414c-b384-b90b6e241cd8-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764175 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-run-netns\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764207 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-log-socket\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764209 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-var-lib-openvswitch\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 
18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764274 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/259c30de-27f1-414c-b384-b90b6e241cd8-cni-binary-copy\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764353 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-modprobe-d\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764439 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-run-systemd\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764413 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b0897e32-576e-42ee-a9c4-bf56f480aba0-agent-certs\") pod \"konnectivity-agent-jkbvg\" (UID: \"b0897e32-576e-42ee-a9c4-bf56f480aba0\") " pod="kube-system/konnectivity-agent-jkbvg" Apr 16 18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764573 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-sysconfig\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764566 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-sys-fs\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.765047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764610 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lb8gp\" (UniqueName: \"kubernetes.io/projected/f8eeffdd-37a1-4898-94ea-20c490313c34-kube-api-access-lb8gp\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764641 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b0897e32-576e-42ee-a9c4-bf56f480aba0-konnectivity-ca\") pod \"konnectivity-agent-jkbvg\" (UID: \"b0897e32-576e-42ee-a9c4-bf56f480aba0\") " pod="kube-system/konnectivity-agent-jkbvg" Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764678 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-host\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " 
pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:11.764820 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:11.764844 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:11.764858 2570 projected.go:194] Error preparing data for projected volume kube-api-access-96lzw for pod openshift-network-diagnostics/network-check-target-fzg6h: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764919 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-etc-selinux\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764958 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-device-dir\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.764961 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-sysctl-conf\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.765281 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-sysctl-conf\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:11.765326 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw podName:6d72f360-ffda-4447-8b43-c1059ff81bf3 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:12.265297135 +0000 UTC m=+3.071049937 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-96lzw" (UniqueName: "kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw") pod "network-check-target-fzg6h" (UID: "6d72f360-ffda-4447-8b43-c1059ff81bf3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.765399 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-sysctl-d\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.765469 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-var-lib-kubelet\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.765535 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29f69f9c-834d-4ff7-92ac-005e00d0651c-tmp\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.765582 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9eda7e8d-1d99-41d3-acfb-b6c80829811c-ovn-node-metrics-cert\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.765614 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f8eeffdd-37a1-4898-94ea-20c490313c34-multus-socket-dir-parent\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.765671 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/259c30de-27f1-414c-b384-b90b6e241cd8-os-release\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.765832 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.765716 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/29f69f9c-834d-4ff7-92ac-005e00d0651c-etc-sysconfig\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.766643 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.765755 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-cni-bin\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.766643 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.765761 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-run-netns\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.766643 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.766149 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b0897e32-576e-42ee-a9c4-bf56f480aba0-konnectivity-ca\") pod \"konnectivity-agent-jkbvg\" (UID: \"b0897e32-576e-42ee-a9c4-bf56f480aba0\") " pod="kube-system/konnectivity-agent-jkbvg" Apr 16 18:16:11.766643 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.766272 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/1abad187-e043-49a6-9671-1535ec064d28-etc-selinux\") pod \"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.766643 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.766296 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/259c30de-27f1-414c-b384-b90b6e241cd8-cni-binary-copy\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.766643 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.766285 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/259c30de-27f1-414c-b384-b90b6e241cd8-system-cni-dir\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.766643 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.766345 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-log-socket\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.766643 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.766425 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9eda7e8d-1d99-41d3-acfb-b6c80829811c-host-cni-netd\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.766643 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.766584 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f8eeffdd-37a1-4898-94ea-20c490313c34-multus-daemon-config\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.768012 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.767968 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfffv\" (UniqueName: \"kubernetes.io/projected/1abad187-e043-49a6-9671-1535ec064d28-kube-api-access-mfffv\") pod 
\"aws-ebs-csi-driver-node-8bzs6\" (UID: \"1abad187-e043-49a6-9671-1535ec064d28\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.768342 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.768285 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b0897e32-576e-42ee-a9c4-bf56f480aba0-agent-certs\") pod \"konnectivity-agent-jkbvg\" (UID: \"b0897e32-576e-42ee-a9c4-bf56f480aba0\") " pod="kube-system/konnectivity-agent-jkbvg" Apr 16 18:16:11.770059 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.770028 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldln7\" (UniqueName: \"kubernetes.io/projected/5af0e6ec-389a-47dd-afc0-725b505e4635-kube-api-access-ldln7\") pod \"network-metrics-daemon-hgcdt\" (UID: \"5af0e6ec-389a-47dd-afc0-725b505e4635\") " pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:11.771098 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.771059 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjnsj\" (UniqueName: \"kubernetes.io/projected/1b14fadb-4a71-439d-84de-91c5c3e29811-kube-api-access-fjnsj\") pod \"iptables-alerter-5jc6x\" (UID: \"1b14fadb-4a71-439d-84de-91c5c3e29811\") " pod="openshift-network-operator/iptables-alerter-5jc6x" Apr 16 18:16:11.772348 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.772305 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4wd4\" (UniqueName: \"kubernetes.io/projected/259c30de-27f1-414c-b384-b90b6e241cd8-kube-api-access-d4wd4\") pod \"multus-additional-cni-plugins-lxx4t\" (UID: \"259c30de-27f1-414c-b384-b90b6e241cd8\") " pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.773680 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.773657 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8kzb\" (UniqueName: \"kubernetes.io/projected/9eda7e8d-1d99-41d3-acfb-b6c80829811c-kube-api-access-x8kzb\") pod \"ovnkube-node-hls95\" (UID: \"9eda7e8d-1d99-41d3-acfb-b6c80829811c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.773998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.773975 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9c54\" (UniqueName: \"kubernetes.io/projected/29f69f9c-834d-4ff7-92ac-005e00d0651c-kube-api-access-x9c54\") pod \"tuned-5z2wk\" (UID: \"29f69f9c-834d-4ff7-92ac-005e00d0651c\") " pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.774970 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.774932 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb8gp\" (UniqueName: \"kubernetes.io/projected/f8eeffdd-37a1-4898-94ea-20c490313c34-kube-api-access-lb8gp\") pod \"multus-65f6j\" (UID: \"f8eeffdd-37a1-4898-94ea-20c490313c34\") " pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.866154 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.866124 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d-host\") pod \"node-ca-d4t7h\" (UID: \"fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d\") " pod="openshift-image-registry/node-ca-d4t7h" Apr 16 18:16:11.866350 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.866264 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" 
(UniqueName: \"kubernetes.io/host-path/fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d-host\") pod \"node-ca-d4t7h\" (UID: \"fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d\") " pod="openshift-image-registry/node-ca-d4t7h" Apr 16 18:16:11.866350 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.866299 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-86k95\" (UniqueName: \"kubernetes.io/projected/fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d-kube-api-access-86k95\") pod \"node-ca-d4t7h\" (UID: \"fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d\") " pod="openshift-image-registry/node-ca-d4t7h" Apr 16 18:16:11.866455 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.866357 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d-serviceca\") pod \"node-ca-d4t7h\" (UID: \"fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d\") " pod="openshift-image-registry/node-ca-d4t7h" Apr 16 18:16:11.866804 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.866785 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d-serviceca\") pod \"node-ca-d4t7h\" (UID: \"fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d\") " pod="openshift-image-registry/node-ca-d4t7h" Apr 16 18:16:11.874473 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.874445 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-86k95\" (UniqueName: \"kubernetes.io/projected/fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d-kube-api-access-86k95\") pod \"node-ca-d4t7h\" (UID: \"fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d\") " pod="openshift-image-registry/node-ca-d4t7h" Apr 16 18:16:11.952494 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.952405 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jc6x" Apr 16 18:16:11.958286 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.958260 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" Apr 16 18:16:11.966284 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.966263 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-65f6j" Apr 16 18:16:11.970924 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.970902 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:11.977556 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.977535 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-jkbvg" Apr 16 18:16:11.983157 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.983141 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" Apr 16 18:16:11.988709 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.988684 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lxx4t" Apr 16 18:16:11.994243 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:11.994212 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-d4t7h" Apr 16 18:16:12.119644 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:12.119612 2570 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 18:16:12.268779 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:12.268710 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs\") pod \"network-metrics-daemon-hgcdt\" (UID: \"5af0e6ec-389a-47dd-afc0-725b505e4635\") " pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:12.268779 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:12.268764 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-96lzw\" (UniqueName: \"kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw\") pod \"network-check-target-fzg6h\" (UID: \"6d72f360-ffda-4447-8b43-c1059ff81bf3\") " pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:12.268937 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:12.268879 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:16:12.268968 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:12.268959 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs podName:5af0e6ec-389a-47dd-afc0-725b505e4635 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:13.268938182 +0000 UTC m=+4.074690979 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs") pod "network-metrics-daemon-hgcdt" (UID: "5af0e6ec-389a-47dd-afc0-725b505e4635") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:16:12.269013 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:12.268886 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:16:12.269013 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:12.268981 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:16:12.269013 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:12.268994 2570 projected.go:194] Error preparing data for projected volume kube-api-access-96lzw for pod openshift-network-diagnostics/network-check-target-fzg6h: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:16:12.269098 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:12.269040 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw podName:6d72f360-ffda-4447-8b43-c1059ff81bf3 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:13.269028244 +0000 UTC m=+4.074781038 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-96lzw" (UniqueName: "kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw") pod "network-check-target-fzg6h" (UID: "6d72f360-ffda-4447-8b43-c1059ff81bf3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:16:12.305715 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:12.305671 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9eda7e8d_1d99_41d3_acfb_b6c80829811c.slice/crio-8ada828ad17fc7a2b7777dffa879d45629f0503a978e1cc7af828cb5d8dfd51a WatchSource:0}: Error finding container 8ada828ad17fc7a2b7777dffa879d45629f0503a978e1cc7af828cb5d8dfd51a: Status 404 returned error can't find the container with id 8ada828ad17fc7a2b7777dffa879d45629f0503a978e1cc7af828cb5d8dfd51a Apr 16 18:16:12.313637 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:12.313608 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8eeffdd_37a1_4898_94ea_20c490313c34.slice/crio-2da936e76e082f07a85954312a197fecaafbdc3c504c2ff919f92d05f2b81788 WatchSource:0}: Error finding container 2da936e76e082f07a85954312a197fecaafbdc3c504c2ff919f92d05f2b81788: Status 404 returned error can't find the container with id 2da936e76e082f07a85954312a197fecaafbdc3c504c2ff919f92d05f2b81788 Apr 16 18:16:12.315336 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:12.315310 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbc7bbf3_3e05_4fdd_ad20_de2ed4f13e8d.slice/crio-8c6ac2fda6e58b1b9ddac13702171c3bbfa19e7462d98bb4c877c13116ec46d0 WatchSource:0}: Error finding container 8c6ac2fda6e58b1b9ddac13702171c3bbfa19e7462d98bb4c877c13116ec46d0: Status 404 returned error can't find the container with id 8c6ac2fda6e58b1b9ddac13702171c3bbfa19e7462d98bb4c877c13116ec46d0 Apr 16 18:16:12.315856 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:12.315806 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod259c30de_27f1_414c_b384_b90b6e241cd8.slice/crio-eec89f1d791dff17f09f6b074a1c6383f8a0efcf8903f19fa78237878c9e97ac WatchSource:0}: Error finding container eec89f1d791dff17f09f6b074a1c6383f8a0efcf8903f19fa78237878c9e97ac: Status 404 returned error can't find the container with id eec89f1d791dff17f09f6b074a1c6383f8a0efcf8903f19fa78237878c9e97ac Apr 16 18:16:12.316805 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:12.316740 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1abad187_e043_49a6_9671_1535ec064d28.slice/crio-df30e337606b0c0384894f570371a0e79649ab447058e9e7202c0bda9be78ff4 WatchSource:0}: Error finding container df30e337606b0c0384894f570371a0e79649ab447058e9e7202c0bda9be78ff4: Status 404 returned error can't find the container with id df30e337606b0c0384894f570371a0e79649ab447058e9e7202c0bda9be78ff4 Apr 16 18:16:12.317304 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:12.317287 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0897e32_576e_42ee_a9c4_bf56f480aba0.slice/crio-8a373dfc7e113b4cf5c637880d732d3fef5deed43949225180bd51ef272fb446 WatchSource:0}: Error finding 
container 8a373dfc7e113b4cf5c637880d732d3fef5deed43949225180bd51ef272fb446: Status 404 returned error can't find the container with id 8a373dfc7e113b4cf5c637880d732d3fef5deed43949225180bd51ef272fb446 Apr 16 18:16:12.319535 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:12.319461 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b14fadb_4a71_439d_84de_91c5c3e29811.slice/crio-1f04e8e805420207dca5618f315581dac430dc3a5776345dd94a6b7756ffab6e WatchSource:0}: Error finding container 1f04e8e805420207dca5618f315581dac430dc3a5776345dd94a6b7756ffab6e: Status 404 returned error can't find the container with id 1f04e8e805420207dca5618f315581dac430dc3a5776345dd94a6b7756ffab6e Apr 16 18:16:12.733835 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:12.733554 2570 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-15 18:11:10 +0000 UTC" deadline="2028-01-27 00:15:22.625897078 +0000 UTC" Apr 16 18:16:12.733835 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:12.733758 2570 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="15605h59m9.892143249s" Apr 16 18:16:12.777705 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:12.776983 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:12.777705 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:12.777096 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3" Apr 16 18:16:12.777705 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:12.777553 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:12.777705 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:12.777658 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635" Apr 16 18:16:12.788837 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:12.788778 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jc6x" event={"ID":"1b14fadb-4a71-439d-84de-91c5c3e29811","Type":"ContainerStarted","Data":"1f04e8e805420207dca5618f315581dac430dc3a5776345dd94a6b7756ffab6e"} Apr 16 18:16:12.790612 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:12.790559 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-jkbvg" event={"ID":"b0897e32-576e-42ee-a9c4-bf56f480aba0","Type":"ContainerStarted","Data":"8a373dfc7e113b4cf5c637880d732d3fef5deed43949225180bd51ef272fb446"} Apr 16 18:16:12.798489 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:12.798450 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-d4t7h" event={"ID":"fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d","Type":"ContainerStarted","Data":"8c6ac2fda6e58b1b9ddac13702171c3bbfa19e7462d98bb4c877c13116ec46d0"} Apr 16 18:16:12.808892 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:12.808865 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxx4t" event={"ID":"259c30de-27f1-414c-b384-b90b6e241cd8","Type":"ContainerStarted","Data":"eec89f1d791dff17f09f6b074a1c6383f8a0efcf8903f19fa78237878c9e97ac"} Apr 16 18:16:12.811058 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:12.811033 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-65f6j" event={"ID":"f8eeffdd-37a1-4898-94ea-20c490313c34","Type":"ContainerStarted","Data":"2da936e76e082f07a85954312a197fecaafbdc3c504c2ff919f92d05f2b81788"} Apr 16 18:16:12.816379 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:12.816354 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" event={"ID":"29f69f9c-834d-4ff7-92ac-005e00d0651c","Type":"ContainerStarted","Data":"e4c745d26c83542110da36f4afd14011ffe403e5b94d393e55a42e8e6129a684"} Apr 16 18:16:12.820215 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:12.820192 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" event={"ID":"1abad187-e043-49a6-9671-1535ec064d28","Type":"ContainerStarted","Data":"df30e337606b0c0384894f570371a0e79649ab447058e9e7202c0bda9be78ff4"} Apr 16 18:16:12.831706 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:12.831680 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" event={"ID":"9eda7e8d-1d99-41d3-acfb-b6c80829811c","Type":"ContainerStarted","Data":"8ada828ad17fc7a2b7777dffa879d45629f0503a978e1cc7af828cb5d8dfd51a"} Apr 16 18:16:12.834667 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:12.834643 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-141-219.ec2.internal" event={"ID":"4f9f135ae9937d0daa3eb597d8fe2521","Type":"ContainerStarted","Data":"14b4d9acc62f82c00c47fe0139b6c91714f61070ff20187344ce5244b003e3c6"} Apr 16 18:16:13.280215 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:13.279361 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-96lzw\" (UniqueName: \"kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw\") pod \"network-check-target-fzg6h\" (UID: \"6d72f360-ffda-4447-8b43-c1059ff81bf3\") " 
pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:13.280215 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:13.279449 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs\") pod \"network-metrics-daemon-hgcdt\" (UID: \"5af0e6ec-389a-47dd-afc0-725b505e4635\") " pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:13.280215 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:13.279589 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:16:13.280215 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:13.279653 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs podName:5af0e6ec-389a-47dd-afc0-725b505e4635 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:15.279629185 +0000 UTC m=+6.085381971 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs") pod "network-metrics-daemon-hgcdt" (UID: "5af0e6ec-389a-47dd-afc0-725b505e4635") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:16:13.280215 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:13.280035 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:16:13.280215 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:13.280051 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:16:13.280215 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:13.280062 2570 projected.go:194] Error preparing data for projected volume kube-api-access-96lzw for pod openshift-network-diagnostics/network-check-target-fzg6h: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:16:13.280215 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:13.280100 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw podName:6d72f360-ffda-4447-8b43-c1059ff81bf3 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:15.280087027 +0000 UTC m=+6.085839808 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-96lzw" (UniqueName: "kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw") pod "network-check-target-fzg6h" (UID: "6d72f360-ffda-4447-8b43-c1059ff81bf3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:16:13.852258 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:13.851007 2570 generic.go:358] "Generic (PLEG): container finished" podID="7f66aba8a476534f5fbd8eab5bc63db2" containerID="d05ae08ec860148d5845b975cfb01e24bc712f4492ef20c1220908b35a70fce8" exitCode=0 Apr 16 18:16:13.852258 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:13.852013 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal" event={"ID":"7f66aba8a476534f5fbd8eab5bc63db2","Type":"ContainerDied","Data":"d05ae08ec860148d5845b975cfb01e24bc712f4492ef20c1220908b35a70fce8"} Apr 16 18:16:13.866877 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:13.866827 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-141-219.ec2.internal" podStartSLOduration=2.866810427 podStartE2EDuration="2.866810427s" podCreationTimestamp="2026-04-16 18:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:16:12.848801384 +0000 UTC m=+3.654554185" watchObservedRunningTime="2026-04-16 18:16:13.866810427 +0000 UTC m=+4.672563229" Apr 16 18:16:14.777276 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:14.777246 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:14.777447 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:14.777368 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3" Apr 16 18:16:14.777527 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:14.777462 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:14.777586 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:14.777538 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635" Apr 16 18:16:14.857097 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:14.857058 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal" event={"ID":"7f66aba8a476534f5fbd8eab5bc63db2","Type":"ContainerStarted","Data":"8bbb8e692a28d3ee7df876d8801bbb1494628e3b7222f70d1ca32fa10b1ec368"} Apr 16 18:16:15.296545 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:15.295870 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-96lzw\" (UniqueName: \"kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw\") pod \"network-check-target-fzg6h\" (UID: \"6d72f360-ffda-4447-8b43-c1059ff81bf3\") " pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:15.296545 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:15.295924 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs\") pod \"network-metrics-daemon-hgcdt\" (UID: \"5af0e6ec-389a-47dd-afc0-725b505e4635\") " pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:15.296545 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:15.296088 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:16:15.296545 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:15.296147 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs podName:5af0e6ec-389a-47dd-afc0-725b505e4635 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:19.296128073 +0000 UTC m=+10.101880864 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs") pod "network-metrics-daemon-hgcdt" (UID: "5af0e6ec-389a-47dd-afc0-725b505e4635") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:16:15.296844 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:15.296590 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:16:15.296844 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:15.296608 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:16:15.296844 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:15.296622 2570 projected.go:194] Error preparing data for projected volume kube-api-access-96lzw for pod openshift-network-diagnostics/network-check-target-fzg6h: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:16:15.296844 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:15.296669 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw podName:6d72f360-ffda-4447-8b43-c1059ff81bf3 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:19.296654053 +0000 UTC m=+10.102406855 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-96lzw" (UniqueName: "kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw") pod "network-check-target-fzg6h" (UID: "6d72f360-ffda-4447-8b43-c1059ff81bf3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:16:16.777602 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:16.777565 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:16.778012 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:16.777717 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635" Apr 16 18:16:16.778012 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:16.777951 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:16.778131 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:16.778102 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3" Apr 16 18:16:18.777998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:18.777963 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:18.778507 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:18.777963 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:18.778507 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:18.778103 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635" Apr 16 18:16:18.778507 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:18.778181 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3" Apr 16 18:16:19.330354 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:19.330308 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-96lzw\" (UniqueName: \"kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw\") pod \"network-check-target-fzg6h\" (UID: \"6d72f360-ffda-4447-8b43-c1059ff81bf3\") " pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:19.330544 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:19.330373 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs\") pod \"network-metrics-daemon-hgcdt\" (UID: \"5af0e6ec-389a-47dd-afc0-725b505e4635\") " pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:19.330544 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:19.330494 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:16:19.330544 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:19.330523 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:16:19.330544 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:19.330535 2570 projected.go:194] Error preparing data for projected volume kube-api-access-96lzw for pod openshift-network-diagnostics/network-check-target-fzg6h: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:16:19.330737 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:19.330502 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:16:19.330737 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:19.330607 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw podName:6d72f360-ffda-4447-8b43-c1059ff81bf3 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:27.330586645 +0000 UTC m=+18.136339444 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-96lzw" (UniqueName: "kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw") pod "network-check-target-fzg6h" (UID: "6d72f360-ffda-4447-8b43-c1059ff81bf3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:16:19.330737 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:19.330634 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs podName:5af0e6ec-389a-47dd-afc0-725b505e4635 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:27.330621564 +0000 UTC m=+18.136374348 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs") pod "network-metrics-daemon-hgcdt" (UID: "5af0e6ec-389a-47dd-afc0-725b505e4635") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:16:20.777709 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:20.777665 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:20.778177 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:20.777790 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3" Apr 16 18:16:20.778177 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:20.777842 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:20.778177 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:20.777964 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635" Apr 16 18:16:22.777483 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:22.777444 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:22.777483 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:22.777464 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:22.778003 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:22.777562 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3" Apr 16 18:16:22.778003 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:22.777708 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635" Apr 16 18:16:24.299926 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.299878 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-141-219.ec2.internal" podStartSLOduration=13.299863806 podStartE2EDuration="13.299863806s" podCreationTimestamp="2026-04-16 18:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:16:14.872426803 +0000 UTC m=+5.678179601" watchObservedRunningTime="2026-04-16 18:16:24.299863806 +0000 UTC m=+15.105616608" Apr 16 18:16:24.300345 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.300110 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-svsjw"] Apr 16 18:16:24.348302 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.348273 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-svsjw" Apr 16 18:16:24.350736 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.350710 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Apr 16 18:16:24.351023 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.351006 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-2xs7s\"" Apr 16 18:16:24.351701 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.351679 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Apr 16 18:16:24.466091 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.466065 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c77bc112-2094-4908-98e9-9722eea678f2-hosts-file\") pod \"node-resolver-svsjw\" (UID: \"c77bc112-2094-4908-98e9-9722eea678f2\") " pod="openshift-dns/node-resolver-svsjw" Apr 16 18:16:24.466257 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.466095 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c77bc112-2094-4908-98e9-9722eea678f2-tmp-dir\") pod \"node-resolver-svsjw\" (UID: \"c77bc112-2094-4908-98e9-9722eea678f2\") " pod="openshift-dns/node-resolver-svsjw" Apr 16 18:16:24.466257 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.466195 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgt5q\" (UniqueName: \"kubernetes.io/projected/c77bc112-2094-4908-98e9-9722eea678f2-kube-api-access-hgt5q\") pod \"node-resolver-svsjw\" (UID: \"c77bc112-2094-4908-98e9-9722eea678f2\") " pod="openshift-dns/node-resolver-svsjw" Apr 16 18:16:24.567166 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.567093 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hgt5q\" (UniqueName: \"kubernetes.io/projected/c77bc112-2094-4908-98e9-9722eea678f2-kube-api-access-hgt5q\") pod \"node-resolver-svsjw\" (UID: \"c77bc112-2094-4908-98e9-9722eea678f2\") " pod="openshift-dns/node-resolver-svsjw" Apr 16 18:16:24.567166 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.567137 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/c77bc112-2094-4908-98e9-9722eea678f2-hosts-file\") pod \"node-resolver-svsjw\" (UID: \"c77bc112-2094-4908-98e9-9722eea678f2\") " pod="openshift-dns/node-resolver-svsjw" Apr 16 18:16:24.567377 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.567167 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c77bc112-2094-4908-98e9-9722eea678f2-tmp-dir\") pod \"node-resolver-svsjw\" (UID: \"c77bc112-2094-4908-98e9-9722eea678f2\") " pod="openshift-dns/node-resolver-svsjw" Apr 16 18:16:24.567377 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.567263 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c77bc112-2094-4908-98e9-9722eea678f2-hosts-file\") pod \"node-resolver-svsjw\" (UID: \"c77bc112-2094-4908-98e9-9722eea678f2\") " pod="openshift-dns/node-resolver-svsjw" Apr 16 18:16:24.567482 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.567469 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c77bc112-2094-4908-98e9-9722eea678f2-tmp-dir\") pod \"node-resolver-svsjw\" (UID: \"c77bc112-2094-4908-98e9-9722eea678f2\") " pod="openshift-dns/node-resolver-svsjw" Apr 16 18:16:24.577342 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.577314 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgt5q\" (UniqueName: \"kubernetes.io/projected/c77bc112-2094-4908-98e9-9722eea678f2-kube-api-access-hgt5q\") pod \"node-resolver-svsjw\" (UID: \"c77bc112-2094-4908-98e9-9722eea678f2\") " pod="openshift-dns/node-resolver-svsjw" Apr 16 18:16:24.658542 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.658507 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-svsjw" Apr 16 18:16:24.777616 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.777589 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:24.777796 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:24.777715 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635" Apr 16 18:16:24.777796 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:24.777780 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:24.777895 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:24.777875 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3" Apr 16 18:16:26.777007 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:26.776964 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:26.777513 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:26.776964 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:26.777513 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:26.777115 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635" Apr 16 18:16:26.777513 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:26.777214 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3" Apr 16 18:16:27.384916 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:27.384879 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-96lzw\" (UniqueName: \"kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw\") pod \"network-check-target-fzg6h\" (UID: \"6d72f360-ffda-4447-8b43-c1059ff81bf3\") " pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:27.384916 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:27.384928 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs\") pod \"network-metrics-daemon-hgcdt\" (UID: \"5af0e6ec-389a-47dd-afc0-725b505e4635\") " pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:27.385148 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:27.385041 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:16:27.385148 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:27.385052 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:16:27.385148 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:27.385076 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:16:27.385148 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:27.385089 2570 projected.go:194] Error preparing data for projected volume kube-api-access-96lzw for pod openshift-network-diagnostics/network-check-target-fzg6h: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:16:27.385148 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:27.385110 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs podName:5af0e6ec-389a-47dd-afc0-725b505e4635 nodeName:}" failed. 
No retries permitted until 2026-04-16 18:16:43.385089002 +0000 UTC m=+34.190841804 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs") pod "network-metrics-daemon-hgcdt" (UID: "5af0e6ec-389a-47dd-afc0-725b505e4635") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:16:27.385148 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:27.385139 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw podName:6d72f360-ffda-4447-8b43-c1059ff81bf3 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:43.385125241 +0000 UTC m=+34.190878043 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-96lzw" (UniqueName: "kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw") pod "network-check-target-fzg6h" (UID: "6d72f360-ffda-4447-8b43-c1059ff81bf3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:16:28.777510 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:28.777489 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:28.777732 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:28.777598 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3" Apr 16 18:16:28.777732 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:28.777489 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:28.777732 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:28.777702 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635" Apr 16 18:16:28.980140 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:28.980110 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc77bc112_2094_4908_98e9_9722eea678f2.slice/crio-14b0fd975a435b9881470b2532a22b6858a64076d8c599cdf59d2d794b34b59e WatchSource:0}: Error finding container 14b0fd975a435b9881470b2532a22b6858a64076d8c599cdf59d2d794b34b59e: Status 404 returned error can't find the container with id 14b0fd975a435b9881470b2532a22b6858a64076d8c599cdf59d2d794b34b59e Apr 16 18:16:29.751218 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.750874 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-7b74c"] Apr 16 18:16:29.754134 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.754113 2570 util.go:30] "No sandbox for pod can be found. 
Apr 16 18:16:29.754282 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:29.754183 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-7b74c" podUID="579176b9-8011-401d-aee2-a97cda1ea10f"
Apr 16 18:16:29.801720 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.801687 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/579176b9-8011-401d-aee2-a97cda1ea10f-dbus\") pod \"global-pull-secret-syncer-7b74c\" (UID: \"579176b9-8011-401d-aee2-a97cda1ea10f\") " pod="kube-system/global-pull-secret-syncer-7b74c"
Apr 16 18:16:29.802423 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.801728 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret\") pod \"global-pull-secret-syncer-7b74c\" (UID: \"579176b9-8011-401d-aee2-a97cda1ea10f\") " pod="kube-system/global-pull-secret-syncer-7b74c"
Apr 16 18:16:29.802423 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.801756 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/579176b9-8011-401d-aee2-a97cda1ea10f-kubelet-config\") pod \"global-pull-secret-syncer-7b74c\" (UID: \"579176b9-8011-401d-aee2-a97cda1ea10f\") " pod="kube-system/global-pull-secret-syncer-7b74c"
Apr 16 18:16:29.884192 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.884114 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-acl-logging/0.log"
Apr 16 18:16:29.884510 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.884483 2570 generic.go:358] "Generic (PLEG): container finished" podID="9eda7e8d-1d99-41d3-acfb-b6c80829811c" containerID="3a8a8a2e41ce17d633abc74d6a3bdda2b6664f5a59779b870162652c0efa993c" exitCode=1
Apr 16 18:16:29.884614 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.884559 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" event={"ID":"9eda7e8d-1d99-41d3-acfb-b6c80829811c","Type":"ContainerStarted","Data":"b98a282a9c3930158cfd096b89f8ea2d6035d79d0d7345253a027a89607576ad"}
Apr 16 18:16:29.884614 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.884597 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" event={"ID":"9eda7e8d-1d99-41d3-acfb-b6c80829811c","Type":"ContainerStarted","Data":"06b4a0a810e3443d2ede3ec4874d921567ea147e9a3e60644285b2ab9c57720c"}
Apr 16 18:16:29.884614 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.884611 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" event={"ID":"9eda7e8d-1d99-41d3-acfb-b6c80829811c","Type":"ContainerStarted","Data":"d35684d714815fcb556965430399684407d21774e337f24a0dcd618ef50c9910"}
Apr 16 18:16:29.884773 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.884620 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" event={"ID":"9eda7e8d-1d99-41d3-acfb-b6c80829811c","Type":"ContainerStarted","Data":"dbfd93dd094bdd8b3800b40da5f079f7df49f304dba2bc8a55ff98a01ddc0e13"}
Apr 16 18:16:29.884773 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.884628 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" event={"ID":"9eda7e8d-1d99-41d3-acfb-b6c80829811c","Type":"ContainerDied","Data":"3a8a8a2e41ce17d633abc74d6a3bdda2b6664f5a59779b870162652c0efa993c"}
Apr 16 18:16:29.884773 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.884641 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" event={"ID":"9eda7e8d-1d99-41d3-acfb-b6c80829811c","Type":"ContainerStarted","Data":"a5093c241673105ca80fe680e4e5eb833d63c792c18ef2c7912c08fb565d436c"}
Apr 16 18:16:29.885960 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.885934 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-svsjw" event={"ID":"c77bc112-2094-4908-98e9-9722eea678f2","Type":"ContainerStarted","Data":"e6b56b6630c910c63abc10aca47f71acf52cb3be0d19cb766b9fe92b3c2f2087"}
Apr 16 18:16:29.886071 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.885970 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-svsjw" event={"ID":"c77bc112-2094-4908-98e9-9722eea678f2","Type":"ContainerStarted","Data":"14b0fd975a435b9881470b2532a22b6858a64076d8c599cdf59d2d794b34b59e"}
Apr 16 18:16:29.887417 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.887379 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-jkbvg" event={"ID":"b0897e32-576e-42ee-a9c4-bf56f480aba0","Type":"ContainerStarted","Data":"deb4a5f44019828a36feac8e78e6069a2a522918865ecb74fc911080067de8ef"}
Apr 16 18:16:29.888718 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.888696 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-d4t7h" event={"ID":"fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d","Type":"ContainerStarted","Data":"27abed2219a7354941192294e0bcc50b8252d1623901a7e80d2ab85b1a6319f7"}
Apr 16 18:16:29.890331 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.890305 2570 generic.go:358] "Generic (PLEG): container finished" podID="259c30de-27f1-414c-b384-b90b6e241cd8" containerID="168132e58492517d5d7a31edc3758151a8db771d481a24491c6b97a0c49b0eac" exitCode=0
Apr 16 18:16:29.890427 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.890337 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxx4t" event={"ID":"259c30de-27f1-414c-b384-b90b6e241cd8","Type":"ContainerDied","Data":"168132e58492517d5d7a31edc3758151a8db771d481a24491c6b97a0c49b0eac"}
Apr 16 18:16:29.892396 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.892352 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-65f6j" event={"ID":"f8eeffdd-37a1-4898-94ea-20c490313c34","Type":"ContainerStarted","Data":"91a0fcf12e043478fc0765e48edf1c983ced6a89a7aaafb4bd0a655ae4ceea2b"}
Apr 16 18:16:29.895180 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.895157 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" event={"ID":"29f69f9c-834d-4ff7-92ac-005e00d0651c","Type":"ContainerStarted","Data":"0dacebc65ac090f2ae357ae2773d5be1134d42136edfacc00a3bf6fef05f196b"}
Apr 16 18:16:29.896401 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.896380 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" event={"ID":"1abad187-e043-49a6-9671-1535ec064d28","Type":"ContainerStarted","Data":"895506463c00017557ac74489dfca1066ffd67e54b43c3be481b5fb6b747cdf6"}
Apr 16 18:16:29.899161 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.899120 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-svsjw" podStartSLOduration=5.899108759 podStartE2EDuration="5.899108759s" podCreationTimestamp="2026-04-16 18:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:16:29.898546712 +0000 UTC m=+20.704299515" watchObservedRunningTime="2026-04-16 18:16:29.899108759 +0000 UTC m=+20.704861561"
Apr 16 18:16:29.902250 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.902191 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/579176b9-8011-401d-aee2-a97cda1ea10f-kubelet-config\") pod \"global-pull-secret-syncer-7b74c\" (UID: \"579176b9-8011-401d-aee2-a97cda1ea10f\") " pod="kube-system/global-pull-secret-syncer-7b74c"
Apr 16 18:16:29.902345 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.902286 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/579176b9-8011-401d-aee2-a97cda1ea10f-dbus\") pod \"global-pull-secret-syncer-7b74c\" (UID: \"579176b9-8011-401d-aee2-a97cda1ea10f\") " pod="kube-system/global-pull-secret-syncer-7b74c"
Apr 16 18:16:29.902345 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.902329 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret\") pod \"global-pull-secret-syncer-7b74c\" (UID: \"579176b9-8011-401d-aee2-a97cda1ea10f\") " pod="kube-system/global-pull-secret-syncer-7b74c"
Apr 16 18:16:29.902440 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.902340 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/579176b9-8011-401d-aee2-a97cda1ea10f-kubelet-config\") pod \"global-pull-secret-syncer-7b74c\" (UID: \"579176b9-8011-401d-aee2-a97cda1ea10f\") " pod="kube-system/global-pull-secret-syncer-7b74c"
Apr 16 18:16:29.902440 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:29.902430 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 16 18:16:29.902518 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:29.902482 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret podName:579176b9-8011-401d-aee2-a97cda1ea10f nodeName:}" failed. No retries permitted until 2026-04-16 18:16:30.402464881 +0000 UTC m=+21.208217672 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret") pod "global-pull-secret-syncer-7b74c" (UID: "579176b9-8011-401d-aee2-a97cda1ea10f") : object "kube-system"/"original-pull-secret" not registered
Apr 16 18:16:29.902590 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.902562 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/579176b9-8011-401d-aee2-a97cda1ea10f-dbus\") pod \"global-pull-secret-syncer-7b74c\" (UID: \"579176b9-8011-401d-aee2-a97cda1ea10f\") " pod="kube-system/global-pull-secret-syncer-7b74c"
Apr 16 18:16:29.912379 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.912341 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-5z2wk" podStartSLOduration=4.285698381 podStartE2EDuration="20.912325282s" podCreationTimestamp="2026-04-16 18:16:09 +0000 UTC" firstStartedPulling="2026-04-16 18:16:12.312656411 +0000 UTC m=+3.118409190" lastFinishedPulling="2026-04-16 18:16:28.939283293 +0000 UTC m=+19.745036091" observedRunningTime="2026-04-16 18:16:29.911986703 +0000 UTC m=+20.717739542" watchObservedRunningTime="2026-04-16 18:16:29.912325282 +0000 UTC m=+20.718078084"
Apr 16 18:16:29.926190 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.926147 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-65f6j" podStartSLOduration=4.217539943 podStartE2EDuration="20.926137271s" podCreationTimestamp="2026-04-16 18:16:09 +0000 UTC" firstStartedPulling="2026-04-16 18:16:12.315298094 +0000 UTC m=+3.121050873" lastFinishedPulling="2026-04-16 18:16:29.023895409 +0000 UTC m=+19.829648201" observedRunningTime="2026-04-16 18:16:29.925705946 +0000 UTC m=+20.731458748" watchObservedRunningTime="2026-04-16 18:16:29.926137271 +0000 UTC m=+20.731890071"
Apr 16 18:16:29.939399 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.939357 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-jkbvg" podStartSLOduration=3.484752092 podStartE2EDuration="19.939343278s" podCreationTimestamp="2026-04-16 18:16:10 +0000 UTC" firstStartedPulling="2026-04-16 18:16:12.320144038 +0000 UTC m=+3.125896831" lastFinishedPulling="2026-04-16 18:16:28.7747352 +0000 UTC m=+19.580488017" observedRunningTime="2026-04-16 18:16:29.938636606 +0000 UTC m=+20.744389407" watchObservedRunningTime="2026-04-16 18:16:29.939343278 +0000 UTC m=+20.745096080"
Apr 16 18:16:29.951727 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:29.951693 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-d4t7h" podStartSLOduration=3.493921562 podStartE2EDuration="19.951684289s" podCreationTimestamp="2026-04-16 18:16:10 +0000 UTC" firstStartedPulling="2026-04-16 18:16:12.316971421 +0000 UTC m=+3.122724211" lastFinishedPulling="2026-04-16 18:16:28.774734144 +0000 UTC m=+19.580486938" observedRunningTime="2026-04-16 18:16:29.951332076 +0000 UTC m=+20.757084878" watchObservedRunningTime="2026-04-16 18:16:29.951684289 +0000 UTC m=+20.757437089"
Apr 16 18:16:30.131569 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:30.131532 2570 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock"
Apr 16 18:16:30.406517 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:30.406483 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret\") pod \"global-pull-secret-syncer-7b74c\" (UID: \"579176b9-8011-401d-aee2-a97cda1ea10f\") " pod="kube-system/global-pull-secret-syncer-7b74c"
Apr 16 18:16:30.406740 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:30.406613 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 16 18:16:30.406740 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:30.406671 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret podName:579176b9-8011-401d-aee2-a97cda1ea10f nodeName:}" failed. No retries permitted until 2026-04-16 18:16:31.406654832 +0000 UTC m=+22.212407614 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret") pod "global-pull-secret-syncer-7b74c" (UID: "579176b9-8011-401d-aee2-a97cda1ea10f") : object "kube-system"/"original-pull-secret" not registered
Apr 16 18:16:30.718163 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:30.718026 2570 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-16T18:16:30.131551009Z","UUID":"922774c3-6b8c-4ab8-8904-6669c318c429","Handler":null,"Name":"","Endpoint":""}
Apr 16 18:16:30.719509 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:30.719489 2570 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0
Apr 16 18:16:30.719509 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:30.719513 2570 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock
Apr 16 18:16:30.777150 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:30.777116 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt"
Apr 16 18:16:30.777311 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:30.777260 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635"
Apr 16 18:16:30.777311 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:30.777116 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h"
Apr 16 18:16:30.777436 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:30.777354 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3"
Apr 16 18:16:30.900311 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:30.900272 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" event={"ID":"1abad187-e043-49a6-9671-1535ec064d28","Type":"ContainerStarted","Data":"949b06f1170c71203ac02f6523c0563e66432cf11d49ecaebb59629c6e0867f8"}
Apr 16 18:16:30.901835 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:30.901805 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jc6x" event={"ID":"1b14fadb-4a71-439d-84de-91c5c3e29811","Type":"ContainerStarted","Data":"71b3202caa9869eb36d28e37bdb751b15abb184a95337d9451a7d1a1500b6490"}
Apr 16 18:16:30.915779 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:30.915735 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-5jc6x" podStartSLOduration=5.298320751 podStartE2EDuration="21.915721843s" podCreationTimestamp="2026-04-16 18:16:09 +0000 UTC" firstStartedPulling="2026-04-16 18:16:12.321882907 +0000 UTC m=+3.127635691" lastFinishedPulling="2026-04-16 18:16:28.939284 +0000 UTC m=+19.745036783" observedRunningTime="2026-04-16 18:16:30.915351935 +0000 UTC m=+21.721104736" watchObservedRunningTime="2026-04-16 18:16:30.915721843 +0000 UTC m=+21.721474644"
Apr 16 18:16:31.414252 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:31.414151 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret\") pod \"global-pull-secret-syncer-7b74c\" (UID: \"579176b9-8011-401d-aee2-a97cda1ea10f\") " pod="kube-system/global-pull-secret-syncer-7b74c"
Apr 16 18:16:31.414478 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:31.414313 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 16 18:16:31.414478 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:31.414384 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret podName:579176b9-8011-401d-aee2-a97cda1ea10f nodeName:}" failed. No retries permitted until 2026-04-16 18:16:33.414367595 +0000 UTC m=+24.220120374 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret") pod "global-pull-secret-syncer-7b74c" (UID: "579176b9-8011-401d-aee2-a97cda1ea10f") : object "kube-system"/"original-pull-secret" not registered
Apr 16 18:16:31.778010 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:31.777829 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-7b74c"
Apr 16 18:16:31.778165 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:31.778136 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-7b74c" podUID="579176b9-8011-401d-aee2-a97cda1ea10f"
pod="kube-system/global-pull-secret-syncer-7b74c" podUID="579176b9-8011-401d-aee2-a97cda1ea10f" Apr 16 18:16:31.905405 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:31.905370 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" event={"ID":"1abad187-e043-49a6-9671-1535ec064d28","Type":"ContainerStarted","Data":"207831db1268c07c9cf518f64ecfb497fb7c2cd262cdf914c79bbed339c4885b"} Apr 16 18:16:31.908045 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:31.908024 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-acl-logging/0.log" Apr 16 18:16:31.908475 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:31.908422 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" event={"ID":"9eda7e8d-1d99-41d3-acfb-b6c80829811c","Type":"ContainerStarted","Data":"7cbf5edaef3a58f468aa7ecf77f14dcd7d06135a7f6819ffbb7c83956c4133a7"} Apr 16 18:16:31.935519 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:31.935482 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8bzs6" podStartSLOduration=3.19683078 podStartE2EDuration="21.935468276s" podCreationTimestamp="2026-04-16 18:16:10 +0000 UTC" firstStartedPulling="2026-04-16 18:16:12.319146197 +0000 UTC m=+3.124898982" lastFinishedPulling="2026-04-16 18:16:31.057783699 +0000 UTC m=+21.863536478" observedRunningTime="2026-04-16 18:16:31.935256344 +0000 UTC m=+22.741009142" watchObservedRunningTime="2026-04-16 18:16:31.935468276 +0000 UTC m=+22.741221077" Apr 16 18:16:32.253512 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:32.253477 2570 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-jkbvg" Apr 16 18:16:32.254452 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:32.254434 2570 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-jkbvg" Apr 16 18:16:32.777517 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:32.777484 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:32.777712 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:32.777486 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:32.777712 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:32.777588 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3" Apr 16 18:16:32.777712 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:32.777669 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635" Apr 16 18:16:33.431681 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:33.431643 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret\") pod \"global-pull-secret-syncer-7b74c\" (UID: \"579176b9-8011-401d-aee2-a97cda1ea10f\") " pod="kube-system/global-pull-secret-syncer-7b74c" Apr 16 18:16:33.432204 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:33.431915 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 18:16:33.432204 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:33.432009 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret podName:579176b9-8011-401d-aee2-a97cda1ea10f nodeName:}" failed. No retries permitted until 2026-04-16 18:16:37.431972438 +0000 UTC m=+28.237725222 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret") pod "global-pull-secret-syncer-7b74c" (UID: "579176b9-8011-401d-aee2-a97cda1ea10f") : object "kube-system"/"original-pull-secret" not registered Apr 16 18:16:33.777357 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:33.777282 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-7b74c" Apr 16 18:16:33.777490 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:33.777392 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-7b74c" podUID="579176b9-8011-401d-aee2-a97cda1ea10f" Apr 16 18:16:34.777935 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:34.777900 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:34.778462 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:34.778034 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3" Apr 16 18:16:34.778462 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:34.778045 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:34.778462 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:34.778175 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635" Apr 16 18:16:35.777257 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:35.777086 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-7b74c" Apr 16 18:16:35.777366 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:35.777349 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-7b74c" podUID="579176b9-8011-401d-aee2-a97cda1ea10f" Apr 16 18:16:35.917816 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:35.917786 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxx4t" event={"ID":"259c30de-27f1-414c-b384-b90b6e241cd8","Type":"ContainerStarted","Data":"c8248095c2e038a0ad08bab254007cffd1350ee2917973a2aec37590329667a3"} Apr 16 18:16:35.920637 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:35.920618 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-acl-logging/0.log" Apr 16 18:16:35.920914 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:35.920892 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" event={"ID":"9eda7e8d-1d99-41d3-acfb-b6c80829811c","Type":"ContainerStarted","Data":"7bb083c3b0fe71dfa35f9026f27a27013791d023e2b0a7b84a62708f3ca18d4d"} Apr 16 18:16:35.921243 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:35.921204 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:35.921390 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:35.921252 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:35.921390 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:35.921322 2570 scope.go:117] "RemoveContainer" containerID="3a8a8a2e41ce17d633abc74d6a3bdda2b6664f5a59779b870162652c0efa993c" Apr 16 18:16:35.935523 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:35.935506 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:36.777840 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:36.777808 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:36.778065 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:36.777808 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:36.778065 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:36.777904 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3" Apr 16 18:16:36.778065 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:36.777974 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635" Apr 16 18:16:36.924556 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:36.924520 2570 generic.go:358] "Generic (PLEG): container finished" podID="259c30de-27f1-414c-b384-b90b6e241cd8" containerID="c8248095c2e038a0ad08bab254007cffd1350ee2917973a2aec37590329667a3" exitCode=0 Apr 16 18:16:36.924962 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:36.924631 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxx4t" event={"ID":"259c30de-27f1-414c-b384-b90b6e241cd8","Type":"ContainerDied","Data":"c8248095c2e038a0ad08bab254007cffd1350ee2917973a2aec37590329667a3"} Apr 16 18:16:36.930388 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:36.930363 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-acl-logging/0.log" Apr 16 18:16:36.930730 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:36.930702 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" event={"ID":"9eda7e8d-1d99-41d3-acfb-b6c80829811c","Type":"ContainerStarted","Data":"dd774b5ca4da006dc1722bb22251c16fc014a074befccb0094cb13a665931fb3"} Apr 16 18:16:36.931041 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:36.931025 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:36.947536 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:36.947511 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:16:36.984780 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:36.984734 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" podStartSLOduration=11.265070599 podStartE2EDuration="27.984719918s" podCreationTimestamp="2026-04-16 18:16:09 +0000 UTC" firstStartedPulling="2026-04-16 18:16:12.307596321 +0000 UTC m=+3.113349102" lastFinishedPulling="2026-04-16 18:16:29.027245622 +0000 UTC m=+19.832998421" observedRunningTime="2026-04-16 18:16:36.983380562 +0000 UTC m=+27.789133375" watchObservedRunningTime="2026-04-16 18:16:36.984719918 +0000 UTC m=+27.790472719" Apr 16 18:16:37.375867 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:37.375831 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-fzg6h"] Apr 16 18:16:37.376032 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:37.375924 2570 util.go:30] "No sandbox for pod can be found. 
Apr 16 18:16:37.376032 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:37.376001 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3"
Apr 16 18:16:37.378321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:37.378296 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hgcdt"]
Apr 16 18:16:37.378433 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:37.378382 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt"
Apr 16 18:16:37.378481 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:37.378463 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635"
Apr 16 18:16:37.388155 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:37.388127 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-7b74c"]
Apr 16 18:16:37.388336 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:37.388264 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-7b74c"
Apr 16 18:16:37.388446 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:37.388365 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-7b74c" podUID="579176b9-8011-401d-aee2-a97cda1ea10f"
Apr 16 18:16:37.462702 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:37.462669 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret\") pod \"global-pull-secret-syncer-7b74c\" (UID: \"579176b9-8011-401d-aee2-a97cda1ea10f\") " pod="kube-system/global-pull-secret-syncer-7b74c"
Apr 16 18:16:37.462856 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:37.462791 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 16 18:16:37.462856 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:37.462849 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret podName:579176b9-8011-401d-aee2-a97cda1ea10f nodeName:}" failed. No retries permitted until 2026-04-16 18:16:45.462835042 +0000 UTC m=+36.268587826 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret") pod "global-pull-secret-syncer-7b74c" (UID: "579176b9-8011-401d-aee2-a97cda1ea10f") : object "kube-system"/"original-pull-secret" not registered
Apr 16 18:16:37.934378 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:37.934348 2570 generic.go:358] "Generic (PLEG): container finished" podID="259c30de-27f1-414c-b384-b90b6e241cd8" containerID="4f81e5902fa6048afbc75251928d46d9323178a06d74ca05ed8efa46a13fbdbb" exitCode=0
Apr 16 18:16:37.934754 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:37.934418 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxx4t" event={"ID":"259c30de-27f1-414c-b384-b90b6e241cd8","Type":"ContainerDied","Data":"4f81e5902fa6048afbc75251928d46d9323178a06d74ca05ed8efa46a13fbdbb"}
Apr 16 18:16:38.777497 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:38.777293 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt"
Apr 16 18:16:38.777661 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:38.777335 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-7b74c"
Apr 16 18:16:38.777661 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:38.777582 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635"
Apr 16 18:16:38.777661 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:38.777637 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-7b74c" podUID="579176b9-8011-401d-aee2-a97cda1ea10f"
Apr 16 18:16:38.938269 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:38.938169 2570 generic.go:358] "Generic (PLEG): container finished" podID="259c30de-27f1-414c-b384-b90b6e241cd8" containerID="2059833f06a4fd130bc432b6b379ab2f2914cc1b694de046062db71714cb99f1" exitCode=0
Apr 16 18:16:38.938705 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:38.938263 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxx4t" event={"ID":"259c30de-27f1-414c-b384-b90b6e241cd8","Type":"ContainerDied","Data":"2059833f06a4fd130bc432b6b379ab2f2914cc1b694de046062db71714cb99f1"}
Apr 16 18:16:39.777907 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:39.777871 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h"
Apr 16 18:16:39.778078 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:39.777959 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3"
pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3" Apr 16 18:16:40.777286 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:40.777252 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-7b74c" Apr 16 18:16:40.777829 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:40.777259 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:40.777829 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:40.777380 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-7b74c" podUID="579176b9-8011-401d-aee2-a97cda1ea10f" Apr 16 18:16:40.777829 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:40.777479 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635" Apr 16 18:16:41.612644 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:41.612609 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-jkbvg" Apr 16 18:16:41.612835 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:41.612769 2570 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 18:16:41.613311 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:41.613283 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-jkbvg" Apr 16 18:16:41.777392 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:41.777361 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:41.777871 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:41.777497 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fzg6h" podUID="6d72f360-ffda-4447-8b43-c1059ff81bf3" Apr 16 18:16:42.027319 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.027287 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-141-219.ec2.internal" event="NodeReady" Apr 16 18:16:42.027490 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.027419 2570 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 16 18:16:42.069258 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.069123 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-844ff7bf89-wscd9"] Apr 16 18:16:42.110523 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.110470 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-56ddbf785c-smgfg"] Apr 16 18:16:42.110728 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.110643 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.113246 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.113027 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Apr 16 18:16:42.113246 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.113102 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Apr 16 18:16:42.114139 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.113895 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-qqz6d\"" Apr 16 18:16:42.114139 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.113947 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\"" Apr 16 18:16:42.122854 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.122835 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Apr 16 18:16:42.137022 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.136995 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb"] Apr 16 18:16:42.137132 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.137114 2570 util.go:30] "No sandbox for pod can be found. 
Apr 16 18:16:42.140718 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.140696 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"openshift-service-ca.crt\""
Apr 16 18:16:42.140718 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.140705 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"open-cluster-management-image-pull-credentials\""
Apr 16 18:16:42.140874 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.140705 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-dockercfg-zxs9h\""
Apr 16 18:16:42.141606 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.141587 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"kube-root-ca.crt\""
Apr 16 18:16:42.141692 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.141634 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-hub-kubeconfig\""
Apr 16 18:16:42.155386 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.155370 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6"]
Apr 16 18:16:42.155506 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.155492 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb"
Apr 16 18:16:42.157672 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.157644 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Apr 16 18:16:42.157767 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.157752 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Apr 16 18:16:42.157993 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.157978 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"default-dockercfg-cdbck\""
Apr 16 18:16:42.172326 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.172310 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl"]
Apr 16 18:16:42.172454 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.172439 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6"
Apr 16 18:16:42.174600 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.174583 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"work-manager-hub-kubeconfig\""
Apr 16 18:16:42.196470 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.196449 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-gd5b4"]
Apr 16 18:16:42.196590 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.196576 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl"
Apr 16 18:16:42.198583 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.198560 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-image-registry-private-configuration\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9"
Apr 16 18:16:42.198679 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.198601 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-ca-trust-extracted\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9"
Apr 16 18:16:42.198679 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.198628 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-installation-pull-secrets\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9"
Apr 16 18:16:42.198679 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.198658 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-bound-sa-token\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9"
Apr 16 18:16:42.198840 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.198741 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7bdv\" (UniqueName: \"kubernetes.io/projected/1d10439d-2d76-4276-9780-19e404a46b29-kube-api-access-t7bdv\") pod \"managed-serviceaccount-addon-agent-56ddbf785c-smgfg\" (UID: \"1d10439d-2d76-4276-9780-19e404a46b29\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-56ddbf785c-smgfg"
Apr 16 18:16:42.198840 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.198778 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9"
Apr 16 18:16:42.198840 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.198793 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-open-cluster-management.io-proxy-agent-signer-client-cert\""
Apr 16 18:16:42.198840 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.198805 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-trusted-ca\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9"
\"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.198840 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.198806 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-hub-kubeconfig\"" Apr 16 18:16:42.199088 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.198818 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-service-proxy-server-certificates\"" Apr 16 18:16:42.199088 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.198842 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc4b5\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-kube-api-access-nc4b5\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.199088 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.198897 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/1d10439d-2d76-4276-9780-19e404a46b29-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-56ddbf785c-smgfg\" (UID: \"1d10439d-2d76-4276-9780-19e404a46b29\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-56ddbf785c-smgfg" Apr 16 18:16:42.199088 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.198922 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-ca\"" Apr 16 18:16:42.199088 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.198927 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-certificates\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.216879 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.216854 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb"] Apr 16 18:16:42.216879 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.216879 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-56ddbf785c-smgfg"] Apr 16 18:16:42.216991 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.216897 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-tblwn"] Apr 16 18:16:42.216991 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.216968 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:42.219242 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.219211 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 16 18:16:42.219470 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.219450 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 16 18:16:42.219533 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.219491 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-gz4g7\"" Apr 16 18:16:42.238887 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.238866 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-844ff7bf89-wscd9"] Apr 16 18:16:42.238887 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.238889 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gd5b4"] Apr 16 18:16:42.238998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.238903 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tblwn"] Apr 16 18:16:42.238998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.238915 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6"] Apr 16 18:16:42.238998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.238927 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl"] Apr 16 18:16:42.238998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.238967 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:16:42.241241 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.241213 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 16 18:16:42.241327 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.241247 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-4b92n\"" Apr 16 18:16:42.241327 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.241311 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 16 18:16:42.241327 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.241324 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 16 18:16:42.300273 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300193 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hz64\" (UniqueName: \"kubernetes.io/projected/804432d4-ca03-499d-9807-b4521b7b7b63-kube-api-access-6hz64\") pod \"klusterlet-addon-workmgr-75cc7fdf4-7cnk6\" (UID: \"804432d4-ca03-499d-9807-b4521b7b7b63\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" Apr 16 18:16:42.300273 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300253 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.300461 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300296 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/90976564-8bbb-407b-a345-f362c0c02c2d-nginx-conf\") pod \"networking-console-plugin-5cb6cf4cb4-297lb\" (UID: \"90976564-8bbb-407b-a345-f362c0c02c2d\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:16:42.300461 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300329 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-hub\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.300461 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300346 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vst4f\" (UniqueName: \"kubernetes.io/projected/be0027a7-e3ae-4c79-8020-883f6b6eda09-kube-api-access-vst4f\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:42.300461 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.300361 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:16:42.300461 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.300383 2570 projected.go:194] Error preparing data for projected 
volume registry-tls for pod openshift-image-registry/image-registry-844ff7bf89-wscd9: secret "image-registry-tls" not found Apr 16 18:16:42.300461 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300404 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-ca\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.300461 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.300452 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls podName:d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd nodeName:}" failed. No retries permitted until 2026-04-16 18:16:42.800430065 +0000 UTC m=+33.606182847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls") pod "image-registry-844ff7bf89-wscd9" (UID: "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd") : secret "image-registry-tls" not found Apr 16 18:16:42.300724 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300524 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert\") pod \"ingress-canary-tblwn\" (UID: \"63acc75e-52de-45b3-a91a-8c41889d9a55\") " pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:16:42.300724 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300553 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/be0027a7-e3ae-4c79-8020-883f6b6eda09-tmp-dir\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:42.300724 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300595 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-ca-trust-extracted\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.300724 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300619 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-installation-pull-secrets\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.300724 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300643 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nc4b5\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-kube-api-access-nc4b5\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.300724 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300684 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-bound-sa-token\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.300724 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300715 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t7bdv\" (UniqueName: \"kubernetes.io/projected/1d10439d-2d76-4276-9780-19e404a46b29-kube-api-access-t7bdv\") pod \"managed-serviceaccount-addon-agent-56ddbf785c-smgfg\" (UID: \"1d10439d-2d76-4276-9780-19e404a46b29\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-56ddbf785c-smgfg" Apr 16 18:16:42.301043 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300743 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/804432d4-ca03-499d-9807-b4521b7b7b63-klusterlet-config\") pod \"klusterlet-addon-workmgr-75cc7fdf4-7cnk6\" (UID: \"804432d4-ca03-499d-9807-b4521b7b7b63\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" Apr 16 18:16:42.301043 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300769 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.301043 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300798 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-trusted-ca\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.301043 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300828 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/1d10439d-2d76-4276-9780-19e404a46b29-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-56ddbf785c-smgfg\" (UID: \"1d10439d-2d76-4276-9780-19e404a46b29\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-56ddbf785c-smgfg" Apr 16 18:16:42.301043 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300858 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-certificates\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.301043 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300886 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:42.301043 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300911 2570 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/804432d4-ca03-499d-9807-b4521b7b7b63-tmp\") pod \"klusterlet-addon-workmgr-75cc7fdf4-7cnk6\" (UID: \"804432d4-ca03-499d-9807-b4521b7b7b63\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" Apr 16 18:16:42.301043 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300943 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be0027a7-e3ae-4c79-8020-883f6b6eda09-config-volume\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:42.301043 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.300973 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzrtb\" (UniqueName: \"kubernetes.io/projected/63acc75e-52de-45b3-a91a-8c41889d9a55-kube-api-access-kzrtb\") pod \"ingress-canary-tblwn\" (UID: \"63acc75e-52de-45b3-a91a-8c41889d9a55\") " pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:16:42.301043 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.301000 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zmv\" (UniqueName: \"kubernetes.io/projected/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-kube-api-access-64zmv\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.301043 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.301029 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-ca-trust-extracted\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.301584 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.301174 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-297lb\" (UID: \"90976564-8bbb-407b-a345-f362c0c02c2d\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:16:42.301584 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.301279 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.301584 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.301346 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-image-registry-private-configuration\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " 
pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.301584 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.301521 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.302138 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.301993 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-trusted-ca\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.302138 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.302090 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-certificates\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.305420 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.305395 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-installation-pull-secrets\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.305533 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.305508 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-image-registry-private-configuration\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.305671 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.305646 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/1d10439d-2d76-4276-9780-19e404a46b29-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-56ddbf785c-smgfg\" (UID: \"1d10439d-2d76-4276-9780-19e404a46b29\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-56ddbf785c-smgfg" Apr 16 18:16:42.309980 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.309925 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-bound-sa-token\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.313981 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.313952 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc4b5\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-kube-api-access-nc4b5\") pod \"image-registry-844ff7bf89-wscd9\" (UID: 
\"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.314382 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.314363 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7bdv\" (UniqueName: \"kubernetes.io/projected/1d10439d-2d76-4276-9780-19e404a46b29-kube-api-access-t7bdv\") pod \"managed-serviceaccount-addon-agent-56ddbf785c-smgfg\" (UID: \"1d10439d-2d76-4276-9780-19e404a46b29\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-56ddbf785c-smgfg" Apr 16 18:16:42.401987 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.401945 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-64zmv\" (UniqueName: \"kubernetes.io/projected/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-kube-api-access-64zmv\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.401987 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.401983 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-297lb\" (UID: \"90976564-8bbb-407b-a345-f362c0c02c2d\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:16:42.402219 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402009 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.402219 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402044 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.402219 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.402117 2570 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:16:42.402219 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402185 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6hz64\" (UniqueName: \"kubernetes.io/projected/804432d4-ca03-499d-9807-b4521b7b7b63-kube-api-access-6hz64\") pod \"klusterlet-addon-workmgr-75cc7fdf4-7cnk6\" (UID: \"804432d4-ca03-499d-9807-b4521b7b7b63\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" Apr 16 18:16:42.402410 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402262 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/90976564-8bbb-407b-a345-f362c0c02c2d-nginx-conf\") pod \"networking-console-plugin-5cb6cf4cb4-297lb\" (UID: \"90976564-8bbb-407b-a345-f362c0c02c2d\") 
" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:16:42.402410 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.402317 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert podName:90976564-8bbb-407b-a345-f362c0c02c2d nodeName:}" failed. No retries permitted until 2026-04-16 18:16:42.902289891 +0000 UTC m=+33.708042683 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-297lb" (UID: "90976564-8bbb-407b-a345-f362c0c02c2d") : secret "networking-console-plugin-cert" not found Apr 16 18:16:42.402410 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402347 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-hub\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.402410 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402377 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vst4f\" (UniqueName: \"kubernetes.io/projected/be0027a7-e3ae-4c79-8020-883f6b6eda09-kube-api-access-vst4f\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:42.402614 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402409 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-ca\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.402614 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402447 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert\") pod \"ingress-canary-tblwn\" (UID: \"63acc75e-52de-45b3-a91a-8c41889d9a55\") " pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:16:42.402614 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402473 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/be0027a7-e3ae-4c79-8020-883f6b6eda09-tmp-dir\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:42.402614 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402544 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/804432d4-ca03-499d-9807-b4521b7b7b63-klusterlet-config\") pod \"klusterlet-addon-workmgr-75cc7fdf4-7cnk6\" (UID: \"804432d4-ca03-499d-9807-b4521b7b7b63\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" Apr 16 18:16:42.402614 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402572 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: 
\"kubernetes.io/secret/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.402921 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402634 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:42.402921 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402661 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/804432d4-ca03-499d-9807-b4521b7b7b63-tmp\") pod \"klusterlet-addon-workmgr-75cc7fdf4-7cnk6\" (UID: \"804432d4-ca03-499d-9807-b4521b7b7b63\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" Apr 16 18:16:42.402921 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402695 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be0027a7-e3ae-4c79-8020-883f6b6eda09-config-volume\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:42.402921 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402727 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kzrtb\" (UniqueName: \"kubernetes.io/projected/63acc75e-52de-45b3-a91a-8c41889d9a55-kube-api-access-kzrtb\") pod \"ingress-canary-tblwn\" (UID: \"63acc75e-52de-45b3-a91a-8c41889d9a55\") " pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:16:42.402921 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402859 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.403150 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.402930 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:16:42.403150 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.402961 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/90976564-8bbb-407b-a345-f362c0c02c2d-nginx-conf\") pod \"networking-console-plugin-5cb6cf4cb4-297lb\" (UID: \"90976564-8bbb-407b-a345-f362c0c02c2d\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:16:42.403150 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.402959 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:16:42.403150 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.402982 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert podName:63acc75e-52de-45b3-a91a-8c41889d9a55 nodeName:}" failed. 
No retries permitted until 2026-04-16 18:16:42.902964887 +0000 UTC m=+33.708717666 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert") pod "ingress-canary-tblwn" (UID: "63acc75e-52de-45b3-a91a-8c41889d9a55") : secret "canary-serving-cert" not found Apr 16 18:16:42.403150 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.403035 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls podName:be0027a7-e3ae-4c79-8020-883f6b6eda09 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:42.903025694 +0000 UTC m=+33.708778473 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls") pod "dns-default-gd5b4" (UID: "be0027a7-e3ae-4c79-8020-883f6b6eda09") : secret "dns-default-metrics-tls" not found Apr 16 18:16:42.403437 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.403323 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/804432d4-ca03-499d-9807-b4521b7b7b63-tmp\") pod \"klusterlet-addon-workmgr-75cc7fdf4-7cnk6\" (UID: \"804432d4-ca03-499d-9807-b4521b7b7b63\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" Apr 16 18:16:42.405217 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.405189 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub\" (UniqueName: \"kubernetes.io/secret/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-hub\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.405346 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.405198 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.405415 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.405350 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/be0027a7-e3ae-4c79-8020-883f6b6eda09-tmp-dir\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:42.405498 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.405467 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca\" (UniqueName: \"kubernetes.io/secret/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-ca\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.405589 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.405507 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be0027a7-e3ae-4c79-8020-883f6b6eda09-config-volume\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:42.405985 ip-10-0-141-219 
kubenswrapper[2570]: I0416 18:16:42.405961 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.406053 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.406008 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/804432d4-ca03-499d-9807-b4521b7b7b63-klusterlet-config\") pod \"klusterlet-addon-workmgr-75cc7fdf4-7cnk6\" (UID: \"804432d4-ca03-499d-9807-b4521b7b7b63\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" Apr 16 18:16:42.413267 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.413058 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vst4f\" (UniqueName: \"kubernetes.io/projected/be0027a7-e3ae-4c79-8020-883f6b6eda09-kube-api-access-vst4f\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:42.413267 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.413177 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hz64\" (UniqueName: \"kubernetes.io/projected/804432d4-ca03-499d-9807-b4521b7b7b63-kube-api-access-6hz64\") pod \"klusterlet-addon-workmgr-75cc7fdf4-7cnk6\" (UID: \"804432d4-ca03-499d-9807-b4521b7b7b63\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" Apr 16 18:16:42.413427 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.413348 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzrtb\" (UniqueName: \"kubernetes.io/projected/63acc75e-52de-45b3-a91a-8c41889d9a55-kube-api-access-kzrtb\") pod \"ingress-canary-tblwn\" (UID: \"63acc75e-52de-45b3-a91a-8c41889d9a55\") " pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:16:42.413967 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.413587 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-64zmv\" (UniqueName: \"kubernetes.io/projected/d3e94ba9-6d0d-433f-bf55-90ff5b6dd994-kube-api-access-64zmv\") pod \"cluster-proxy-proxy-agent-d96fff796-nlgnl\" (UID: \"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.460564 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.460528 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-56ddbf785c-smgfg" Apr 16 18:16:42.480464 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.480429 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" Apr 16 18:16:42.512422 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.512384 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:16:42.777939 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.777905 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-7b74c" Apr 16 18:16:42.778582 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.777905 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:42.780709 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.780685 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 16 18:16:42.780838 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.780744 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 16 18:16:42.780838 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.780787 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-pwdr4\"" Apr 16 18:16:42.806434 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.806412 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:42.806568 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.806553 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:16:42.806617 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.806570 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-844ff7bf89-wscd9: secret "image-registry-tls" not found Apr 16 18:16:42.806654 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.806631 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls podName:d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd nodeName:}" failed. No retries permitted until 2026-04-16 18:16:43.806617084 +0000 UTC m=+34.612369867 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls") pod "image-registry-844ff7bf89-wscd9" (UID: "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd") : secret "image-registry-tls" not found Apr 16 18:16:42.907579 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.907536 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert\") pod \"ingress-canary-tblwn\" (UID: \"63acc75e-52de-45b3-a91a-8c41889d9a55\") " pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:16:42.907762 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.907632 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:42.907762 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:42.907675 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-297lb\" (UID: \"90976564-8bbb-407b-a345-f362c0c02c2d\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:16:42.907762 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.907705 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:16:42.907917 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.907763 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:16:42.907917 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.907780 2570 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:16:42.907917 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.907767 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert podName:63acc75e-52de-45b3-a91a-8c41889d9a55 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:43.907751257 +0000 UTC m=+34.713504035 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert") pod "ingress-canary-tblwn" (UID: "63acc75e-52de-45b3-a91a-8c41889d9a55") : secret "canary-serving-cert" not found Apr 16 18:16:42.907917 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.907835 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert podName:90976564-8bbb-407b-a345-f362c0c02c2d nodeName:}" failed. No retries permitted until 2026-04-16 18:16:43.907819648 +0000 UTC m=+34.713572432 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-297lb" (UID: "90976564-8bbb-407b-a345-f362c0c02c2d") : secret "networking-console-plugin-cert" not found Apr 16 18:16:42.907917 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:42.907852 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls podName:be0027a7-e3ae-4c79-8020-883f6b6eda09 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:43.907842836 +0000 UTC m=+34.713595619 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls") pod "dns-default-gd5b4" (UID: "be0027a7-e3ae-4c79-8020-883f6b6eda09") : secret "dns-default-metrics-tls" not found Apr 16 18:16:43.412606 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:43.412567 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-96lzw\" (UniqueName: \"kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw\") pod \"network-check-target-fzg6h\" (UID: \"6d72f360-ffda-4447-8b43-c1059ff81bf3\") " pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:43.412823 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:43.412623 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs\") pod \"network-metrics-daemon-hgcdt\" (UID: \"5af0e6ec-389a-47dd-afc0-725b505e4635\") " pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:16:43.412823 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:43.412700 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:16:43.412823 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:43.412725 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:16:43.412823 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:43.412737 2570 projected.go:194] Error preparing data for projected volume kube-api-access-96lzw for pod openshift-network-diagnostics/network-check-target-fzg6h: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:16:43.412823 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:43.412740 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 16 18:16:43.412823 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:43.412803 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs podName:5af0e6ec-389a-47dd-afc0-725b505e4635 nodeName:}" failed. No retries permitted until 2026-04-16 18:17:15.412783234 +0000 UTC m=+66.218536015 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs") pod "network-metrics-daemon-hgcdt" (UID: "5af0e6ec-389a-47dd-afc0-725b505e4635") : secret "metrics-daemon-secret" not found Apr 16 18:16:43.412823 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:43.412821 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw podName:6d72f360-ffda-4447-8b43-c1059ff81bf3 nodeName:}" failed. No retries permitted until 2026-04-16 18:17:15.412811822 +0000 UTC m=+66.218564601 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-96lzw" (UniqueName: "kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw") pod "network-check-target-fzg6h" (UID: "6d72f360-ffda-4447-8b43-c1059ff81bf3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:16:43.777806 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:43.777732 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:16:43.780407 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:43.780383 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 16 18:16:43.780791 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:43.780429 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-p8pr6\"" Apr 16 18:16:43.781289 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:43.781268 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 16 18:16:43.817150 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:43.817124 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:43.817324 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:43.817288 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:16:43.817324 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:43.817302 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-844ff7bf89-wscd9: secret "image-registry-tls" not found Apr 16 18:16:43.817431 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:43.817353 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls podName:d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd nodeName:}" failed. No retries permitted until 2026-04-16 18:16:45.817337587 +0000 UTC m=+36.623090381 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls") pod "image-registry-844ff7bf89-wscd9" (UID: "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd") : secret "image-registry-tls" not found Apr 16 18:16:43.918376 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:43.918335 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert\") pod \"ingress-canary-tblwn\" (UID: \"63acc75e-52de-45b3-a91a-8c41889d9a55\") " pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:16:43.918535 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:43.918436 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:43.918535 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:43.918480 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-297lb\" (UID: \"90976564-8bbb-407b-a345-f362c0c02c2d\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:16:43.918535 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:43.918493 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:16:43.918657 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:43.918575 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert podName:63acc75e-52de-45b3-a91a-8c41889d9a55 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:45.918555363 +0000 UTC m=+36.724308164 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert") pod "ingress-canary-tblwn" (UID: "63acc75e-52de-45b3-a91a-8c41889d9a55") : secret "canary-serving-cert" not found Apr 16 18:16:43.918657 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:43.918582 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:16:43.918657 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:43.918617 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls podName:be0027a7-e3ae-4c79-8020-883f6b6eda09 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:45.918607055 +0000 UTC m=+36.724359834 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls") pod "dns-default-gd5b4" (UID: "be0027a7-e3ae-4c79-8020-883f6b6eda09") : secret "dns-default-metrics-tls" not found Apr 16 18:16:43.918657 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:43.918628 2570 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:16:43.918830 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:43.918686 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert podName:90976564-8bbb-407b-a345-f362c0c02c2d nodeName:}" failed. No retries permitted until 2026-04-16 18:16:45.918669538 +0000 UTC m=+36.724422322 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-297lb" (UID: "90976564-8bbb-407b-a345-f362c0c02c2d") : secret "networking-console-plugin-cert" not found Apr 16 18:16:44.603351 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:44.603013 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-56ddbf785c-smgfg"] Apr 16 18:16:44.603351 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:44.603309 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6"] Apr 16 18:16:44.604185 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:44.604148 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl"] Apr 16 18:16:44.687078 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:44.686978 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3e94ba9_6d0d_433f_bf55_90ff5b6dd994.slice/crio-b00aef6caa470cdbe4e40f20ae8f68045248ef63bbf533cc4c1a70ddcc4d6bef WatchSource:0}: Error finding container b00aef6caa470cdbe4e40f20ae8f68045248ef63bbf533cc4c1a70ddcc4d6bef: Status 404 returned error can't find the container with id b00aef6caa470cdbe4e40f20ae8f68045248ef63bbf533cc4c1a70ddcc4d6bef Apr 16 18:16:44.687580 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:44.687548 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d10439d_2d76_4276_9780_19e404a46b29.slice/crio-d10cc6b2f7f8dbdae43b7ce0b24c862e0173c3ff3f2b51e812aae5a77fdb548b WatchSource:0}: Error finding container d10cc6b2f7f8dbdae43b7ce0b24c862e0173c3ff3f2b51e812aae5a77fdb548b: Status 404 returned error can't find the container with id d10cc6b2f7f8dbdae43b7ce0b24c862e0173c3ff3f2b51e812aae5a77fdb548b Apr 16 18:16:44.688288 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:44.688260 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804432d4_ca03_499d_9807_b4521b7b7b63.slice/crio-9f44853720ca99344c5ce2570687a5fe91d4a6a60f7a83ece7b9a460b2387778 WatchSource:0}: Error finding container 9f44853720ca99344c5ce2570687a5fe91d4a6a60f7a83ece7b9a460b2387778: Status 404 returned error can't find the container with id 
9f44853720ca99344c5ce2570687a5fe91d4a6a60f7a83ece7b9a460b2387778 Apr 16 18:16:44.953010 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:44.952907 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxx4t" event={"ID":"259c30de-27f1-414c-b384-b90b6e241cd8","Type":"ContainerStarted","Data":"5232d106efc62a54c9411dc4e6d12370eb3a416b6a19bc88582b3e8e1a703fd5"} Apr 16 18:16:44.954187 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:44.954149 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" event={"ID":"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994","Type":"ContainerStarted","Data":"b00aef6caa470cdbe4e40f20ae8f68045248ef63bbf533cc4c1a70ddcc4d6bef"} Apr 16 18:16:44.955352 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:44.955329 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-56ddbf785c-smgfg" event={"ID":"1d10439d-2d76-4276-9780-19e404a46b29","Type":"ContainerStarted","Data":"d10cc6b2f7f8dbdae43b7ce0b24c862e0173c3ff3f2b51e812aae5a77fdb548b"} Apr 16 18:16:44.956507 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:44.956475 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" event={"ID":"804432d4-ca03-499d-9807-b4521b7b7b63","Type":"ContainerStarted","Data":"9f44853720ca99344c5ce2570687a5fe91d4a6a60f7a83ece7b9a460b2387778"} Apr 16 18:16:45.530998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:45.530944 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret\") pod \"global-pull-secret-syncer-7b74c\" (UID: \"579176b9-8011-401d-aee2-a97cda1ea10f\") " pod="kube-system/global-pull-secret-syncer-7b74c" Apr 16 18:16:45.545348 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:45.545268 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/579176b9-8011-401d-aee2-a97cda1ea10f-original-pull-secret\") pod \"global-pull-secret-syncer-7b74c\" (UID: \"579176b9-8011-401d-aee2-a97cda1ea10f\") " pod="kube-system/global-pull-secret-syncer-7b74c" Apr 16 18:16:45.789648 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:45.789254 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-7b74c" Apr 16 18:16:45.834492 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:45.833904 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:45.834492 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:45.834049 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:16:45.834492 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:45.834067 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-844ff7bf89-wscd9: secret "image-registry-tls" not found Apr 16 18:16:45.834492 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:45.834123 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls podName:d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd nodeName:}" failed. No retries permitted until 2026-04-16 18:16:49.834105037 +0000 UTC m=+40.639857831 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls") pod "image-registry-844ff7bf89-wscd9" (UID: "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd") : secret "image-registry-tls" not found Apr 16 18:16:45.934595 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:45.934346 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert\") pod \"ingress-canary-tblwn\" (UID: \"63acc75e-52de-45b3-a91a-8c41889d9a55\") " pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:16:45.934595 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:45.934627 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:45.934964 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:45.934674 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-297lb\" (UID: \"90976564-8bbb-407b-a345-f362c0c02c2d\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:16:45.934964 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:45.934806 2570 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:16:45.934964 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:45.934869 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert podName:90976564-8bbb-407b-a345-f362c0c02c2d nodeName:}" failed. No retries permitted until 2026-04-16 18:16:49.934849075 +0000 UTC m=+40.740601867 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-297lb" (UID: "90976564-8bbb-407b-a345-f362c0c02c2d") : secret "networking-console-plugin-cert" not found Apr 16 18:16:45.935318 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:45.935296 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:16:45.935389 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:45.935353 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert podName:63acc75e-52de-45b3-a91a-8c41889d9a55 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:49.935337022 +0000 UTC m=+40.741089816 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert") pod "ingress-canary-tblwn" (UID: "63acc75e-52de-45b3-a91a-8c41889d9a55") : secret "canary-serving-cert" not found Apr 16 18:16:45.935444 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:45.935409 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:16:45.935444 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:45.935443 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls podName:be0027a7-e3ae-4c79-8020-883f6b6eda09 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:49.935430926 +0000 UTC m=+40.741183720 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls") pod "dns-default-gd5b4" (UID: "be0027a7-e3ae-4c79-8020-883f6b6eda09") : secret "dns-default-metrics-tls" not found Apr 16 18:16:45.974352 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:45.974141 2570 generic.go:358] "Generic (PLEG): container finished" podID="259c30de-27f1-414c-b384-b90b6e241cd8" containerID="5232d106efc62a54c9411dc4e6d12370eb3a416b6a19bc88582b3e8e1a703fd5" exitCode=0 Apr 16 18:16:45.974890 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:45.974419 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxx4t" event={"ID":"259c30de-27f1-414c-b384-b90b6e241cd8","Type":"ContainerDied","Data":"5232d106efc62a54c9411dc4e6d12370eb3a416b6a19bc88582b3e8e1a703fd5"} Apr 16 18:16:45.996211 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:45.995961 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-7b74c"] Apr 16 18:16:46.009386 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:16:46.009356 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod579176b9_8011_401d_aee2_a97cda1ea10f.slice/crio-18aec61470eaed432217fbca852b636634f689fdc1bf07650b12313cc1567ffe WatchSource:0}: Error finding container 18aec61470eaed432217fbca852b636634f689fdc1bf07650b12313cc1567ffe: Status 404 returned error can't find the container with id 18aec61470eaed432217fbca852b636634f689fdc1bf07650b12313cc1567ffe Apr 16 18:16:46.983034 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:46.982997 2570 generic.go:358] "Generic (PLEG): container finished" 
podID="259c30de-27f1-414c-b384-b90b6e241cd8" containerID="e521520a16f56e6fecc8ae9708a2f52eb4f39bd620f83370a601dc9086b7bc1e" exitCode=0 Apr 16 18:16:46.983702 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:46.983100 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxx4t" event={"ID":"259c30de-27f1-414c-b384-b90b6e241cd8","Type":"ContainerDied","Data":"e521520a16f56e6fecc8ae9708a2f52eb4f39bd620f83370a601dc9086b7bc1e"} Apr 16 18:16:46.985469 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:46.985416 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-7b74c" event={"ID":"579176b9-8011-401d-aee2-a97cda1ea10f","Type":"ContainerStarted","Data":"18aec61470eaed432217fbca852b636634f689fdc1bf07650b12313cc1567ffe"} Apr 16 18:16:49.872284 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:49.872250 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:49.872779 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:49.872383 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:16:49.872779 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:49.872399 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-844ff7bf89-wscd9: secret "image-registry-tls" not found Apr 16 18:16:49.872779 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:49.872464 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls podName:d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd nodeName:}" failed. No retries permitted until 2026-04-16 18:16:57.872435549 +0000 UTC m=+48.678188334 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls") pod "image-registry-844ff7bf89-wscd9" (UID: "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd") : secret "image-registry-tls" not found Apr 16 18:16:49.973203 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:49.973161 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert\") pod \"ingress-canary-tblwn\" (UID: \"63acc75e-52de-45b3-a91a-8c41889d9a55\") " pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:16:49.973377 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:49.973274 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:49.973377 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:49.973315 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-297lb\" (UID: \"90976564-8bbb-407b-a345-f362c0c02c2d\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:16:49.973377 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:49.973345 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:16:49.973546 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:49.973422 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert podName:63acc75e-52de-45b3-a91a-8c41889d9a55 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:57.97340117 +0000 UTC m=+48.779153968 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert") pod "ingress-canary-tblwn" (UID: "63acc75e-52de-45b3-a91a-8c41889d9a55") : secret "canary-serving-cert" not found Apr 16 18:16:49.973546 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:49.973437 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:16:49.973546 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:49.973443 2570 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:16:49.973546 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:49.973512 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls podName:be0027a7-e3ae-4c79-8020-883f6b6eda09 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:57.973480695 +0000 UTC m=+48.779233493 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls") pod "dns-default-gd5b4" (UID: "be0027a7-e3ae-4c79-8020-883f6b6eda09") : secret "dns-default-metrics-tls" not found Apr 16 18:16:49.973749 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:49.973551 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert podName:90976564-8bbb-407b-a345-f362c0c02c2d nodeName:}" failed. No retries permitted until 2026-04-16 18:16:57.973532913 +0000 UTC m=+48.779285699 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-297lb" (UID: "90976564-8bbb-407b-a345-f362c0c02c2d") : secret "networking-console-plugin-cert" not found Apr 16 18:16:52.997994 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:52.997955 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" event={"ID":"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994","Type":"ContainerStarted","Data":"8103aab85d6acf2d6012a3afb18da8d710b4fb94d34cbec5e19427bda11057a9"} Apr 16 18:16:52.999395 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:52.999373 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-56ddbf785c-smgfg" event={"ID":"1d10439d-2d76-4276-9780-19e404a46b29","Type":"ContainerStarted","Data":"21c33fd6aae192e2f0751f03360837743a839516cdeb95b50ecc2a89a9923f55"} Apr 16 18:16:53.000899 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:53.000878 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" event={"ID":"804432d4-ca03-499d-9807-b4521b7b7b63","Type":"ContainerStarted","Data":"732606d6336902413cc48e594244406ccbeb4efe821ce5c9cda9e53ac23f7fe3"} Apr 16 18:16:53.001335 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:53.001316 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" Apr 16 18:16:53.003661 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:53.003519 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" Apr 16 18:16:53.005247 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:53.005213 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxx4t" event={"ID":"259c30de-27f1-414c-b384-b90b6e241cd8","Type":"ContainerStarted","Data":"1eed0b76b4566e0a5d4e7a8884d92da2c1e51cd6bf66d3cbf341a284e455362e"} Apr 16 18:16:53.019036 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:53.018965 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-56ddbf785c-smgfg" podStartSLOduration=25.92913751 podStartE2EDuration="34.0189461s" podCreationTimestamp="2026-04-16 18:16:19 +0000 UTC" firstStartedPulling="2026-04-16 18:16:44.705513946 +0000 UTC m=+35.511266731" lastFinishedPulling="2026-04-16 18:16:52.795322525 +0000 UTC m=+43.601075321" observedRunningTime="2026-04-16 18:16:53.018307558 +0000 UTC m=+43.824060366" 
watchObservedRunningTime="2026-04-16 18:16:53.0189461 +0000 UTC m=+43.824698903" Apr 16 18:16:53.047091 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:53.047034 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lxx4t" podStartSLOduration=10.636822568 podStartE2EDuration="43.047021662s" podCreationTimestamp="2026-04-16 18:16:10 +0000 UTC" firstStartedPulling="2026-04-16 18:16:12.318089238 +0000 UTC m=+3.123842019" lastFinishedPulling="2026-04-16 18:16:44.728288321 +0000 UTC m=+35.534041113" observedRunningTime="2026-04-16 18:16:53.046809101 +0000 UTC m=+43.852561913" watchObservedRunningTime="2026-04-16 18:16:53.047021662 +0000 UTC m=+43.852774463" Apr 16 18:16:53.078202 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:53.078153 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" podStartSLOduration=25.988253514 podStartE2EDuration="34.078137035s" podCreationTimestamp="2026-04-16 18:16:19 +0000 UTC" firstStartedPulling="2026-04-16 18:16:44.705495901 +0000 UTC m=+35.511248681" lastFinishedPulling="2026-04-16 18:16:52.795379421 +0000 UTC m=+43.601132202" observedRunningTime="2026-04-16 18:16:53.077062914 +0000 UTC m=+43.882815719" watchObservedRunningTime="2026-04-16 18:16:53.078137035 +0000 UTC m=+43.883889842" Apr 16 18:16:54.009191 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:54.009126 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-7b74c" event={"ID":"579176b9-8011-401d-aee2-a97cda1ea10f","Type":"ContainerStarted","Data":"75d312cb3245b015f4ae5ccdee819967af4c03f899487bf05a7af8c3aa470e2e"} Apr 16 18:16:54.025833 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:54.025780 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-7b74c" podStartSLOduration=18.235133457 podStartE2EDuration="25.025751327s" podCreationTimestamp="2026-04-16 18:16:29 +0000 UTC" firstStartedPulling="2026-04-16 18:16:46.014353344 +0000 UTC m=+36.820106138" lastFinishedPulling="2026-04-16 18:16:52.804971219 +0000 UTC m=+43.610724008" observedRunningTime="2026-04-16 18:16:54.024407271 +0000 UTC m=+44.830160073" watchObservedRunningTime="2026-04-16 18:16:54.025751327 +0000 UTC m=+44.831504130" Apr 16 18:16:56.017196 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:56.017156 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" event={"ID":"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994","Type":"ContainerStarted","Data":"f3090bcbc874f499407f8609a04f4ae4a6a00305b0315f6c81d950fde30d0649"} Apr 16 18:16:56.017196 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:56.017196 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" event={"ID":"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994","Type":"ContainerStarted","Data":"d90587a1b1b693929718d2ea98367b4ce152bff942c4c2a1d68085c166864090"} Apr 16 18:16:56.037238 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:56.037181 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" podStartSLOduration=26.293423311 podStartE2EDuration="37.037156111s" podCreationTimestamp="2026-04-16 18:16:19 +0000 UTC" firstStartedPulling="2026-04-16 18:16:44.705392194 
+0000 UTC m=+35.511144975" lastFinishedPulling="2026-04-16 18:16:55.449124981 +0000 UTC m=+46.254877775" observedRunningTime="2026-04-16 18:16:56.03677211 +0000 UTC m=+46.842524910" watchObservedRunningTime="2026-04-16 18:16:56.037156111 +0000 UTC m=+46.842908911" Apr 16 18:16:57.940919 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:57.940879 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:16:57.941339 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:57.941029 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:16:57.941339 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:57.941041 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-844ff7bf89-wscd9: secret "image-registry-tls" not found Apr 16 18:16:57.941339 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:57.941093 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls podName:d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd nodeName:}" failed. No retries permitted until 2026-04-16 18:17:13.941078823 +0000 UTC m=+64.746831615 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls") pod "image-registry-844ff7bf89-wscd9" (UID: "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd") : secret "image-registry-tls" not found Apr 16 18:16:58.042034 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:58.041998 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert\") pod \"ingress-canary-tblwn\" (UID: \"63acc75e-52de-45b3-a91a-8c41889d9a55\") " pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:16:58.042199 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:58.042061 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:16:58.042199 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:16:58.042087 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-297lb\" (UID: \"90976564-8bbb-407b-a345-f362c0c02c2d\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:16:58.042199 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:58.042143 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:16:58.042199 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:58.042163 2570 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:16:58.042199 ip-10-0-141-219 
kubenswrapper[2570]: E0416 18:16:58.042190 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:16:58.042512 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:58.042217 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert podName:63acc75e-52de-45b3-a91a-8c41889d9a55 nodeName:}" failed. No retries permitted until 2026-04-16 18:17:14.042195501 +0000 UTC m=+64.847948298 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert") pod "ingress-canary-tblwn" (UID: "63acc75e-52de-45b3-a91a-8c41889d9a55") : secret "canary-serving-cert" not found Apr 16 18:16:58.042512 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:58.042254 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls podName:be0027a7-e3ae-4c79-8020-883f6b6eda09 nodeName:}" failed. No retries permitted until 2026-04-16 18:17:14.04222387 +0000 UTC m=+64.847976662 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls") pod "dns-default-gd5b4" (UID: "be0027a7-e3ae-4c79-8020-883f6b6eda09") : secret "dns-default-metrics-tls" not found Apr 16 18:16:58.042512 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:16:58.042269 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert podName:90976564-8bbb-407b-a345-f362c0c02c2d nodeName:}" failed. No retries permitted until 2026-04-16 18:17:14.042261924 +0000 UTC m=+64.848014703 (durationBeforeRetry 16s). 
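The "Observed pod startup duration" entries earlier in this boot report two figures: podStartE2EDuration is wall-clock time from pod creation to observed running, and podStartSLOduration subtracts the time spent pulling images. The managed-serviceaccount agent's numbers check out: 34.019s end to end minus the 8.09s pull window (18:16:44.705 to 18:16:52.795) gives the reported 25.929s. The arithmetic, with timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

// podStartSLOduration = end-to-end startup time minus time spent pulling
// images. All values are copied from the latency-tracker entry for
// managed-serviceaccount-addon-agent-56ddbf785c-smgfg.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	firstPull := parse("2026-04-16 18:16:44.705513946 +0000 UTC")
	lastPull := parse("2026-04-16 18:16:52.795322525 +0000 UTC")
	e2e, err := time.ParseDuration("34.0189461s")
	if err != nil {
		panic(err)
	}
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Printf("podStartSLOduration ≈ %v\n", slo) // ≈ 25.929137521s (log: 25.92913751)
}
```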
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-297lb" (UID: "90976564-8bbb-407b-a345-f362c0c02c2d") : secret "networking-console-plugin-cert" not found Apr 16 18:17:08.948642 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:08.948616 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hls95" Apr 16 18:17:13.960881 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:13.960845 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:17:13.961302 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:13.960999 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:17:13.961302 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:13.961020 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-844ff7bf89-wscd9: secret "image-registry-tls" not found Apr 16 18:17:13.961302 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:13.961073 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls podName:d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd nodeName:}" failed. No retries permitted until 2026-04-16 18:17:45.961058864 +0000 UTC m=+96.766811642 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls") pod "image-registry-844ff7bf89-wscd9" (UID: "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd") : secret "image-registry-tls" not found Apr 16 18:17:14.061980 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:14.061950 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert\") pod \"ingress-canary-tblwn\" (UID: \"63acc75e-52de-45b3-a91a-8c41889d9a55\") " pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:17:14.062129 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:14.062011 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:17:14.062129 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:14.062037 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-297lb\" (UID: \"90976564-8bbb-407b-a345-f362c0c02c2d\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:17:14.062129 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:14.062085 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:17:14.062257 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:14.062130 2570 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:17:14.062257 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:14.062139 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert podName:63acc75e-52de-45b3-a91a-8c41889d9a55 nodeName:}" failed. No retries permitted until 2026-04-16 18:17:46.062125898 +0000 UTC m=+96.867878681 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert") pod "ingress-canary-tblwn" (UID: "63acc75e-52de-45b3-a91a-8c41889d9a55") : secret "canary-serving-cert" not found Apr 16 18:17:14.062257 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:14.062151 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:17:14.062257 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:14.062176 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert podName:90976564-8bbb-407b-a345-f362c0c02c2d nodeName:}" failed. No retries permitted until 2026-04-16 18:17:46.062164895 +0000 UTC m=+96.867917673 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-297lb" (UID: "90976564-8bbb-407b-a345-f362c0c02c2d") : secret "networking-console-plugin-cert" not found Apr 16 18:17:14.062257 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:14.062203 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls podName:be0027a7-e3ae-4c79-8020-883f6b6eda09 nodeName:}" failed. No retries permitted until 2026-04-16 18:17:46.062187709 +0000 UTC m=+96.867940490 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls") pod "dns-default-gd5b4" (UID: "be0027a7-e3ae-4c79-8020-883f6b6eda09") : secret "dns-default-metrics-tls" not found Apr 16 18:17:15.473013 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:15.472977 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-96lzw\" (UniqueName: \"kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw\") pod \"network-check-target-fzg6h\" (UID: \"6d72f360-ffda-4447-8b43-c1059ff81bf3\") " pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:17:15.473013 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:15.473016 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs\") pod \"network-metrics-daemon-hgcdt\" (UID: \"5af0e6ec-389a-47dd-afc0-725b505e4635\") " pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:17:15.473470 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:15.473145 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 16 18:17:15.473470 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:15.473208 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs podName:5af0e6ec-389a-47dd-afc0-725b505e4635 nodeName:}" failed. No retries permitted until 2026-04-16 18:18:19.473190794 +0000 UTC m=+130.278943587 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs") pod "network-metrics-daemon-hgcdt" (UID: "5af0e6ec-389a-47dd-afc0-725b505e4635") : secret "metrics-daemon-secret" not found Apr 16 18:17:15.475563 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:15.475547 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 16 18:17:15.485624 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:15.485605 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 16 18:17:15.496773 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:15.496741 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-96lzw\" (UniqueName: \"kubernetes.io/projected/6d72f360-ffda-4447-8b43-c1059ff81bf3-kube-api-access-96lzw\") pod \"network-check-target-fzg6h\" (UID: \"6d72f360-ffda-4447-8b43-c1059ff81bf3\") " pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:17:15.589734 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:15.589706 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-p8pr6\"" Apr 16 18:17:15.597666 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:15.597649 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:17:15.712415 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:15.712384 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-fzg6h"] Apr 16 18:17:15.715429 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:17:15.715405 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d72f360_ffda_4447_8b43_c1059ff81bf3.slice/crio-1374d8d459f537deb8e4476b24cb1930c3344278f1ac9c5d547ce11facb7aaf3 WatchSource:0}: Error finding container 1374d8d459f537deb8e4476b24cb1930c3344278f1ac9c5d547ce11facb7aaf3: Status 404 returned error can't find the container with id 1374d8d459f537deb8e4476b24cb1930c3344278f1ac9c5d547ce11facb7aaf3 Apr 16 18:17:16.066209 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:16.066172 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fzg6h" event={"ID":"6d72f360-ffda-4447-8b43-c1059ff81bf3","Type":"ContainerStarted","Data":"1374d8d459f537deb8e4476b24cb1930c3344278f1ac9c5d547ce11facb7aaf3"} Apr 16 18:17:19.074781 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:19.074701 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fzg6h" event={"ID":"6d72f360-ffda-4447-8b43-c1059ff81bf3","Type":"ContainerStarted","Data":"9f7da7b38e459a050b6bb8c210b1ddfc6c58e26d9b595cce8235cb3a71930620"} Apr 16 18:17:19.075131 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:19.074921 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:17:19.091843 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:19.091790 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-fzg6h" podStartSLOduration=67.12042983 
podStartE2EDuration="1m10.091778s" podCreationTimestamp="2026-04-16 18:16:09 +0000 UTC" firstStartedPulling="2026-04-16 18:17:15.718921795 +0000 UTC m=+66.524674588" lastFinishedPulling="2026-04-16 18:17:18.690269976 +0000 UTC m=+69.496022758" observedRunningTime="2026-04-16 18:17:19.090536326 +0000 UTC m=+69.896289151" watchObservedRunningTime="2026-04-16 18:17:19.091778 +0000 UTC m=+69.897530793" Apr 16 18:17:46.023776 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:46.023739 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:17:46.024283 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:46.023880 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:17:46.024283 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:46.023903 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-844ff7bf89-wscd9: secret "image-registry-tls" not found Apr 16 18:17:46.024283 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:46.023969 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls podName:d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd nodeName:}" failed. No retries permitted until 2026-04-16 18:18:50.023951978 +0000 UTC m=+160.829704757 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls") pod "image-registry-844ff7bf89-wscd9" (UID: "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd") : secret "image-registry-tls" not found Apr 16 18:17:46.124238 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:46.124190 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-297lb\" (UID: \"90976564-8bbb-407b-a345-f362c0c02c2d\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:17:46.124426 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:46.124285 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert\") pod \"ingress-canary-tblwn\" (UID: \"63acc75e-52de-45b3-a91a-8c41889d9a55\") " pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:17:46.124426 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:46.124327 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:17:46.124426 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:46.124361 2570 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:17:46.124426 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:46.124408 2570 secret.go:189] Couldn't 
get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:17:46.124426 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:46.124413 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:17:46.124604 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:46.124446 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert podName:90976564-8bbb-407b-a345-f362c0c02c2d nodeName:}" failed. No retries permitted until 2026-04-16 18:18:50.124424264 +0000 UTC m=+160.930177063 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-297lb" (UID: "90976564-8bbb-407b-a345-f362c0c02c2d") : secret "networking-console-plugin-cert" not found Apr 16 18:17:46.124604 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:46.124463 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls podName:be0027a7-e3ae-4c79-8020-883f6b6eda09 nodeName:}" failed. No retries permitted until 2026-04-16 18:18:50.124455073 +0000 UTC m=+160.930207860 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls") pod "dns-default-gd5b4" (UID: "be0027a7-e3ae-4c79-8020-883f6b6eda09") : secret "dns-default-metrics-tls" not found Apr 16 18:17:46.124604 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:17:46.124473 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert podName:63acc75e-52de-45b3-a91a-8c41889d9a55 nodeName:}" failed. No retries permitted until 2026-04-16 18:18:50.124467959 +0000 UTC m=+160.930220738 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert") pod "ingress-canary-tblwn" (UID: "63acc75e-52de-45b3-a91a-8c41889d9a55") : secret "canary-serving-cert" not found Apr 16 18:17:50.080359 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:17:50.080332 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fzg6h" Apr 16 18:18:19.477247 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:19.477187 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs\") pod \"network-metrics-daemon-hgcdt\" (UID: \"5af0e6ec-389a-47dd-afc0-725b505e4635\") " pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:18:19.477700 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:18:19.477328 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 16 18:18:19.477700 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:18:19.477404 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs podName:5af0e6ec-389a-47dd-afc0-725b505e4635 nodeName:}" failed. No retries permitted until 2026-04-16 18:20:21.477387571 +0000 UTC m=+252.283140354 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs") pod "network-metrics-daemon-hgcdt" (UID: "5af0e6ec-389a-47dd-afc0-725b505e4635") : secret "metrics-daemon-secret" not found Apr 16 18:18:45.123373 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:18:45.123332 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[registry-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" podUID="d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd" Apr 16 18:18:45.166667 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:18:45.166618 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" podUID="90976564-8bbb-407b-a345-f362c0c02c2d" Apr 16 18:18:45.234203 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:18:45.234175 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-dns/dns-default-gd5b4" podUID="be0027a7-e3ae-4c79-8020-883f6b6eda09" Apr 16 18:18:45.246353 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:18:45.246317 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-tblwn" podUID="63acc75e-52de-45b3-a91a-8c41889d9a55" Apr 16 18:18:45.274104 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:45.274076 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:18:45.274222 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:45.274076 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:18:45.274292 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:45.274085 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:18:45.274348 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:45.274086 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gd5b4" Apr 16 18:18:45.795192 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:18:45.795158 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-multus/network-metrics-daemon-hgcdt" podUID="5af0e6ec-389a-47dd-afc0-725b505e4635" Apr 16 18:18:49.592019 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:49.591996 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-svsjw_c77bc112-2094-4908-98e9-9722eea678f2/dns-node-resolver/0.log" Apr 16 18:18:50.113326 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:50.113295 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls\") pod \"image-registry-844ff7bf89-wscd9\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:18:50.113498 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:18:50.113412 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:18:50.113498 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:18:50.113423 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-844ff7bf89-wscd9: secret "image-registry-tls" not found Apr 16 18:18:50.113498 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:18:50.113473 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls podName:d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd nodeName:}" failed. No retries permitted until 2026-04-16 18:20:52.113460084 +0000 UTC m=+282.919212862 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls") pod "image-registry-844ff7bf89-wscd9" (UID: "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd") : secret "image-registry-tls" not found Apr 16 18:18:50.213797 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:50.213762 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:18:50.213963 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:50.213815 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-297lb\" (UID: \"90976564-8bbb-407b-a345-f362c0c02c2d\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:18:50.213963 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:50.213871 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert\") pod \"ingress-canary-tblwn\" (UID: \"63acc75e-52de-45b3-a91a-8c41889d9a55\") " pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:18:50.213963 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:18:50.213912 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:18:50.214068 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:18:50.213982 2570 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:18:50.214068 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:18:50.213995 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls podName:be0027a7-e3ae-4c79-8020-883f6b6eda09 nodeName:}" failed. No retries permitted until 2026-04-16 18:20:52.21397968 +0000 UTC m=+283.019732464 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls") pod "dns-default-gd5b4" (UID: "be0027a7-e3ae-4c79-8020-883f6b6eda09") : secret "dns-default-metrics-tls" not found Apr 16 18:18:50.214068 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:18:50.214028 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert podName:90976564-8bbb-407b-a345-f362c0c02c2d nodeName:}" failed. No retries permitted until 2026-04-16 18:20:52.214016483 +0000 UTC m=+283.019769267 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-297lb" (UID: "90976564-8bbb-407b-a345-f362c0c02c2d") : secret "networking-console-plugin-cert" not found Apr 16 18:18:50.214068 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:18:50.213982 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:18:50.214068 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:18:50.214059 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert podName:63acc75e-52de-45b3-a91a-8c41889d9a55 nodeName:}" failed. No retries permitted until 2026-04-16 18:20:52.21404933 +0000 UTC m=+283.019802124 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert") pod "ingress-canary-tblwn" (UID: "63acc75e-52de-45b3-a91a-8c41889d9a55") : secret "canary-serving-cert" not found Apr 16 18:18:50.392334 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:50.392307 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-d4t7h_fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d/node-ca/0.log" Apr 16 18:18:53.002174 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:53.002119 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" podUID="804432d4-ca03-499d-9807-b4521b7b7b63" containerName="acm-agent" probeResult="failure" output="Get \"http://10.132.0.11:8000/readyz\": dial tcp 10.132.0.11:8000: connect: connection refused" Apr 16 18:18:53.295028 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:53.294947 2570 generic.go:358] "Generic (PLEG): container finished" podID="1d10439d-2d76-4276-9780-19e404a46b29" containerID="21c33fd6aae192e2f0751f03360837743a839516cdeb95b50ecc2a89a9923f55" exitCode=255 Apr 16 18:18:53.295028 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:53.295013 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-56ddbf785c-smgfg" event={"ID":"1d10439d-2d76-4276-9780-19e404a46b29","Type":"ContainerDied","Data":"21c33fd6aae192e2f0751f03360837743a839516cdeb95b50ecc2a89a9923f55"} Apr 16 18:18:53.295388 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:53.295362 2570 scope.go:117] "RemoveContainer" containerID="21c33fd6aae192e2f0751f03360837743a839516cdeb95b50ecc2a89a9923f55" Apr 16 18:18:53.296262 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:53.296222 2570 generic.go:358] "Generic (PLEG): container finished" podID="804432d4-ca03-499d-9807-b4521b7b7b63" containerID="732606d6336902413cc48e594244406ccbeb4efe821ce5c9cda9e53ac23f7fe3" exitCode=1 Apr 16 18:18:53.296351 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:53.296292 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" event={"ID":"804432d4-ca03-499d-9807-b4521b7b7b63","Type":"ContainerDied","Data":"732606d6336902413cc48e594244406ccbeb4efe821ce5c9cda9e53ac23f7fe3"} Apr 16 18:18:53.296585 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:53.296570 2570 scope.go:117] "RemoveContainer" containerID="732606d6336902413cc48e594244406ccbeb4efe821ce5c9cda9e53ac23f7fe3" Apr 16 18:18:54.300112 
ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:54.300075 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-56ddbf785c-smgfg" event={"ID":"1d10439d-2d76-4276-9780-19e404a46b29","Type":"ContainerStarted","Data":"77d2d906149552580f7df84f9f20d44ddece368eb0ae80981a69d0510bdc38d8"} Apr 16 18:18:54.301552 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:54.301533 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" event={"ID":"804432d4-ca03-499d-9807-b4521b7b7b63","Type":"ContainerStarted","Data":"c9c20d581d5519cd467728012bd821cd118ed14844a1cb53bacfb6e036f1dd8b"} Apr 16 18:18:54.301799 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:54.301781 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" Apr 16 18:18:54.302337 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:54.302323 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-75cc7fdf4-7cnk6" Apr 16 18:18:57.777313 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:18:57.777271 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:19:12.063662 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.063588 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-9z848"] Apr 16 18:19:12.066023 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.065996 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-9z848" Apr 16 18:19:12.068505 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.068477 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 16 18:19:12.068505 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.068495 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-pnvp2\"" Apr 16 18:19:12.068661 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.068491 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 16 18:19:12.069187 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.069167 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 16 18:19:12.069453 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.069439 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 16 18:19:12.085813 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.085789 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-9z848"] Apr 16 18:19:12.184402 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.184372 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/25310a19-0ed5-4bcc-831c-55862fdf6d2f-crio-socket\") pod \"insights-runtime-extractor-9z848\" (UID: \"25310a19-0ed5-4bcc-831c-55862fdf6d2f\") " pod="openshift-insights/insights-runtime-extractor-9z848" Apr 
16 18:19:12.184402 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.184405 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/25310a19-0ed5-4bcc-831c-55862fdf6d2f-data-volume\") pod \"insights-runtime-extractor-9z848\" (UID: \"25310a19-0ed5-4bcc-831c-55862fdf6d2f\") " pod="openshift-insights/insights-runtime-extractor-9z848" Apr 16 18:19:12.184593 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.184426 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/25310a19-0ed5-4bcc-831c-55862fdf6d2f-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-9z848\" (UID: \"25310a19-0ed5-4bcc-831c-55862fdf6d2f\") " pod="openshift-insights/insights-runtime-extractor-9z848" Apr 16 18:19:12.184593 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.184496 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6hsw\" (UniqueName: \"kubernetes.io/projected/25310a19-0ed5-4bcc-831c-55862fdf6d2f-kube-api-access-l6hsw\") pod \"insights-runtime-extractor-9z848\" (UID: \"25310a19-0ed5-4bcc-831c-55862fdf6d2f\") " pod="openshift-insights/insights-runtime-extractor-9z848" Apr 16 18:19:12.184593 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.184573 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/25310a19-0ed5-4bcc-831c-55862fdf6d2f-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-9z848\" (UID: \"25310a19-0ed5-4bcc-831c-55862fdf6d2f\") " pod="openshift-insights/insights-runtime-extractor-9z848" Apr 16 18:19:12.285554 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.285519 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/25310a19-0ed5-4bcc-831c-55862fdf6d2f-crio-socket\") pod \"insights-runtime-extractor-9z848\" (UID: \"25310a19-0ed5-4bcc-831c-55862fdf6d2f\") " pod="openshift-insights/insights-runtime-extractor-9z848" Apr 16 18:19:12.285554 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.285555 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/25310a19-0ed5-4bcc-831c-55862fdf6d2f-data-volume\") pod \"insights-runtime-extractor-9z848\" (UID: \"25310a19-0ed5-4bcc-831c-55862fdf6d2f\") " pod="openshift-insights/insights-runtime-extractor-9z848" Apr 16 18:19:12.285769 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.285631 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/25310a19-0ed5-4bcc-831c-55862fdf6d2f-crio-socket\") pod \"insights-runtime-extractor-9z848\" (UID: \"25310a19-0ed5-4bcc-831c-55862fdf6d2f\") " pod="openshift-insights/insights-runtime-extractor-9z848" Apr 16 18:19:12.285769 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.285680 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/25310a19-0ed5-4bcc-831c-55862fdf6d2f-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-9z848\" (UID: \"25310a19-0ed5-4bcc-831c-55862fdf6d2f\") " pod="openshift-insights/insights-runtime-extractor-9z848" Apr 16 18:19:12.285769 ip-10-0-141-219 
kubenswrapper[2570]: I0416 18:19:12.285729 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l6hsw\" (UniqueName: \"kubernetes.io/projected/25310a19-0ed5-4bcc-831c-55862fdf6d2f-kube-api-access-l6hsw\") pod \"insights-runtime-extractor-9z848\" (UID: \"25310a19-0ed5-4bcc-831c-55862fdf6d2f\") " pod="openshift-insights/insights-runtime-extractor-9z848" Apr 16 18:19:12.285933 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.285788 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/25310a19-0ed5-4bcc-831c-55862fdf6d2f-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-9z848\" (UID: \"25310a19-0ed5-4bcc-831c-55862fdf6d2f\") " pod="openshift-insights/insights-runtime-extractor-9z848" Apr 16 18:19:12.285978 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.285926 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/25310a19-0ed5-4bcc-831c-55862fdf6d2f-data-volume\") pod \"insights-runtime-extractor-9z848\" (UID: \"25310a19-0ed5-4bcc-831c-55862fdf6d2f\") " pod="openshift-insights/insights-runtime-extractor-9z848" Apr 16 18:19:12.286128 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.286112 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/25310a19-0ed5-4bcc-831c-55862fdf6d2f-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-9z848\" (UID: \"25310a19-0ed5-4bcc-831c-55862fdf6d2f\") " pod="openshift-insights/insights-runtime-extractor-9z848" Apr 16 18:19:12.287960 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.287944 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/25310a19-0ed5-4bcc-831c-55862fdf6d2f-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-9z848\" (UID: \"25310a19-0ed5-4bcc-831c-55862fdf6d2f\") " pod="openshift-insights/insights-runtime-extractor-9z848" Apr 16 18:19:12.294331 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.294311 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6hsw\" (UniqueName: \"kubernetes.io/projected/25310a19-0ed5-4bcc-831c-55862fdf6d2f-kube-api-access-l6hsw\") pod \"insights-runtime-extractor-9z848\" (UID: \"25310a19-0ed5-4bcc-831c-55862fdf6d2f\") " pod="openshift-insights/insights-runtime-extractor-9z848" Apr 16 18:19:12.374377 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.374351 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-9z848" Apr 16 18:19:12.486042 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:12.486010 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-9z848"] Apr 16 18:19:12.488861 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:19:12.488833 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25310a19_0ed5_4bcc_831c_55862fdf6d2f.slice/crio-f7f4d1b5e139b3a0b45cbabd966e1761e43fd0749197036dee796784e744ccd1 WatchSource:0}: Error finding container f7f4d1b5e139b3a0b45cbabd966e1761e43fd0749197036dee796784e744ccd1: Status 404 returned error can't find the container with id f7f4d1b5e139b3a0b45cbabd966e1761e43fd0749197036dee796784e744ccd1 Apr 16 18:19:13.344270 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:13.344240 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-9z848" event={"ID":"25310a19-0ed5-4bcc-831c-55862fdf6d2f","Type":"ContainerStarted","Data":"e2940d973f3972ca55bf41493eae7b4e3a3623287d2a1978c02c780f3d4f107f"} Apr 16 18:19:13.344270 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:13.344272 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-9z848" event={"ID":"25310a19-0ed5-4bcc-831c-55862fdf6d2f","Type":"ContainerStarted","Data":"102d654d98a64ca6874b918655ee5c6166d3f919fe014f66a4083b981bb2f428"} Apr 16 18:19:13.344622 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:13.344284 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-9z848" event={"ID":"25310a19-0ed5-4bcc-831c-55862fdf6d2f","Type":"ContainerStarted","Data":"f7f4d1b5e139b3a0b45cbabd966e1761e43fd0749197036dee796784e744ccd1"} Apr 16 18:19:15.350771 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:15.350741 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-9z848" event={"ID":"25310a19-0ed5-4bcc-831c-55862fdf6d2f","Type":"ContainerStarted","Data":"057a3c646d39dac701223cf0bc98739455a80b5f6b240da129f7bb32bf1b50c1"} Apr 16 18:19:15.368110 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:15.368069 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-9z848" podStartSLOduration=1.306942333 podStartE2EDuration="3.368055686s" podCreationTimestamp="2026-04-16 18:19:12 +0000 UTC" firstStartedPulling="2026-04-16 18:19:12.53344545 +0000 UTC m=+183.339198229" lastFinishedPulling="2026-04-16 18:19:14.594558795 +0000 UTC m=+185.400311582" observedRunningTime="2026-04-16 18:19:15.367707332 +0000 UTC m=+186.173460132" watchObservedRunningTime="2026-04-16 18:19:15.368055686 +0000 UTC m=+186.173808488" Apr 16 18:19:25.434350 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.434320 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-nq9cn"] Apr 16 18:19:25.437708 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.437690 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.440547 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.440522 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 16 18:19:25.441125 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.441102 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 16 18:19:25.441367 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.441348 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 16 18:19:25.441439 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.441375 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 16 18:19:25.441485 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.441445 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 16 18:19:25.441640 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.441625 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-gmsbp\"" Apr 16 18:19:25.443448 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.443419 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 16 18:19:25.482495 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.482463 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/438f800a-174a-40f5-9292-468d97227591-node-exporter-accelerators-collector-config\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.482624 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.482519 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/438f800a-174a-40f5-9292-468d97227591-node-exporter-wtmp\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.482624 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.482574 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/438f800a-174a-40f5-9292-468d97227591-node-exporter-tls\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.482729 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.482650 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/438f800a-174a-40f5-9292-468d97227591-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.482729 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.482714 2570 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/438f800a-174a-40f5-9292-468d97227591-sys\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.482837 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.482751 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/438f800a-174a-40f5-9292-468d97227591-root\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.482837 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.482783 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/438f800a-174a-40f5-9292-468d97227591-metrics-client-ca\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.482917 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.482837 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxnpv\" (UniqueName: \"kubernetes.io/projected/438f800a-174a-40f5-9292-468d97227591-kube-api-access-mxnpv\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.482917 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.482896 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/438f800a-174a-40f5-9292-468d97227591-node-exporter-textfile\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.584034 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.583994 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/438f800a-174a-40f5-9292-468d97227591-node-exporter-accelerators-collector-config\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.584207 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.584048 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/438f800a-174a-40f5-9292-468d97227591-node-exporter-wtmp\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.584207 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.584146 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/438f800a-174a-40f5-9292-468d97227591-node-exporter-tls\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.584207 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.584190 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/438f800a-174a-40f5-9292-468d97227591-node-exporter-wtmp\") pod 
\"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.584362 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.584190 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/438f800a-174a-40f5-9292-468d97227591-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.584362 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.584274 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/438f800a-174a-40f5-9292-468d97227591-sys\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.584362 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.584306 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/438f800a-174a-40f5-9292-468d97227591-root\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.584362 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.584323 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/438f800a-174a-40f5-9292-468d97227591-sys\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.584362 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.584333 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/438f800a-174a-40f5-9292-468d97227591-metrics-client-ca\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.584595 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.584413 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mxnpv\" (UniqueName: \"kubernetes.io/projected/438f800a-174a-40f5-9292-468d97227591-kube-api-access-mxnpv\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.584595 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.584423 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/438f800a-174a-40f5-9292-468d97227591-root\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.584595 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.584462 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/438f800a-174a-40f5-9292-468d97227591-node-exporter-textfile\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.584858 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.584706 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: 
\"kubernetes.io/configmap/438f800a-174a-40f5-9292-468d97227591-node-exporter-accelerators-collector-config\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.584858 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.584757 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/438f800a-174a-40f5-9292-468d97227591-node-exporter-textfile\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.584858 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.584810 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/438f800a-174a-40f5-9292-468d97227591-metrics-client-ca\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.586574 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.586552 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/438f800a-174a-40f5-9292-468d97227591-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.586660 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.586639 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/438f800a-174a-40f5-9292-468d97227591-node-exporter-tls\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.591586 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.591563 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxnpv\" (UniqueName: \"kubernetes.io/projected/438f800a-174a-40f5-9292-468d97227591-kube-api-access-mxnpv\") pod \"node-exporter-nq9cn\" (UID: \"438f800a-174a-40f5-9292-468d97227591\") " pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.746635 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:25.746553 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-nq9cn" Apr 16 18:19:25.754432 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:19:25.754405 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod438f800a_174a_40f5_9292_468d97227591.slice/crio-5c803d25c21acd3786dd54d1a079a18a917457dbe47b5c1adc8a662f52b29030 WatchSource:0}: Error finding container 5c803d25c21acd3786dd54d1a079a18a917457dbe47b5c1adc8a662f52b29030: Status 404 returned error can't find the container with id 5c803d25c21acd3786dd54d1a079a18a917457dbe47b5c1adc8a662f52b29030 Apr 16 18:19:26.377317 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:26.377262 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-nq9cn" event={"ID":"438f800a-174a-40f5-9292-468d97227591","Type":"ContainerStarted","Data":"5c803d25c21acd3786dd54d1a079a18a917457dbe47b5c1adc8a662f52b29030"} Apr 16 18:19:27.381206 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:27.381173 2570 generic.go:358] "Generic (PLEG): container finished" podID="438f800a-174a-40f5-9292-468d97227591" containerID="98549f60dd53ad4ecacfb8fc16e182c2cd8e3ee0309ba823592b9bf85f926666" exitCode=0 Apr 16 18:19:27.381594 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:27.381252 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-nq9cn" event={"ID":"438f800a-174a-40f5-9292-468d97227591","Type":"ContainerDied","Data":"98549f60dd53ad4ecacfb8fc16e182c2cd8e3ee0309ba823592b9bf85f926666"} Apr 16 18:19:28.385617 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:28.385583 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-nq9cn" event={"ID":"438f800a-174a-40f5-9292-468d97227591","Type":"ContainerStarted","Data":"4770364cc579d711dd7c545b57f6578887d51bdaeb647c241f873a82d3e381f0"} Apr 16 18:19:28.385617 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:28.385618 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-nq9cn" event={"ID":"438f800a-174a-40f5-9292-468d97227591","Type":"ContainerStarted","Data":"7445136f3f20f7bc8acef2650f1600dfc75700862b0989c86bfa9bc2d11c1835"} Apr 16 18:19:32.514213 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:32.514168 2570 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" podUID="d3e94ba9-6d0d-433f-bf55-90ff5b6dd994" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 16 18:19:34.212795 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.212740 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-nq9cn" podStartSLOduration=8.524245013 podStartE2EDuration="9.212722937s" podCreationTimestamp="2026-04-16 18:19:25 +0000 UTC" firstStartedPulling="2026-04-16 18:19:25.756284443 +0000 UTC m=+196.562037221" lastFinishedPulling="2026-04-16 18:19:26.444762366 +0000 UTC m=+197.250515145" observedRunningTime="2026-04-16 18:19:28.406492589 +0000 UTC m=+199.212245391" watchObservedRunningTime="2026-04-16 18:19:34.212722937 +0000 UTC m=+205.018475738" Apr 16 18:19:34.213165 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.212949 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-844ff7bf89-wscd9"] Apr 16 18:19:34.213165 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:19:34.213123 2570 
pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[registry-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" podUID="d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd" Apr 16 18:19:34.399617 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.399588 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:19:34.403704 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.403684 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:19:34.458621 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.458594 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc4b5\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-kube-api-access-nc4b5\") pod \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " Apr 16 18:19:34.458621 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.458623 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-certificates\") pod \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " Apr 16 18:19:34.458780 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.458643 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-trusted-ca\") pod \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " Apr 16 18:19:34.458780 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.458692 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-image-registry-private-configuration\") pod \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " Apr 16 18:19:34.458780 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.458734 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-bound-sa-token\") pod \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " Apr 16 18:19:34.458780 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.458762 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-installation-pull-secrets\") pod \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " Apr 16 18:19:34.458948 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.458792 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-ca-trust-extracted\") pod \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\" (UID: \"d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd\") " Apr 16 18:19:34.459153 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.459127 2570 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd" (UID: "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:19:34.459270 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.459167 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd" (UID: "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:19:34.459270 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.459168 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd" (UID: "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:19:34.461124 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.461099 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd" (UID: "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:19:34.461252 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.461181 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd" (UID: "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 18:19:34.461252 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.461197 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-image-registry-private-configuration" (OuterVolumeSpecName: "image-registry-private-configuration") pod "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd" (UID: "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd"). InnerVolumeSpecName "image-registry-private-configuration". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:19:34.461334 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.461263 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-kube-api-access-nc4b5" (OuterVolumeSpecName: "kube-api-access-nc4b5") pod "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd" (UID: "d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd"). InnerVolumeSpecName "kube-api-access-nc4b5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 18:19:34.559945 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.559916 2570 reconciler_common.go:299] "Volume detached for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-image-registry-private-configuration\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:19:34.559945 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.559941 2570 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-bound-sa-token\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:19:34.559945 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.559950 2570 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-installation-pull-secrets\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:19:34.560137 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.559959 2570 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-ca-trust-extracted\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:19:34.560137 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.559969 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nc4b5\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-kube-api-access-nc4b5\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:19:34.560137 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.559978 2570 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-certificates\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:19:34.560137 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:34.559987 2570 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-trusted-ca\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:19:35.401730 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:35.401697 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-844ff7bf89-wscd9" Apr 16 18:19:35.433419 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:35.433388 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-844ff7bf89-wscd9"] Apr 16 18:19:35.440032 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:35.440005 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-844ff7bf89-wscd9"] Apr 16 18:19:35.567631 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:35.567597 2570 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd-registry-tls\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:19:35.781040 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:35.780964 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd" path="/var/lib/kubelet/pods/d15ca9d1-c24d-4dcd-9b66-2d7adc9e2fbd/volumes" Apr 16 18:19:42.513561 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:42.513517 2570 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" podUID="d3e94ba9-6d0d-433f-bf55-90ff5b6dd994" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 16 18:19:52.514090 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:52.514050 2570 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" podUID="d3e94ba9-6d0d-433f-bf55-90ff5b6dd994" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 16 18:19:52.514490 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:52.514137 2570 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" Apr 16 18:19:52.514610 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:52.514580 2570 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="service-proxy" containerStatusID={"Type":"cri-o","ID":"f3090bcbc874f499407f8609a04f4ae4a6a00305b0315f6c81d950fde30d0649"} pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" containerMessage="Container service-proxy failed liveness probe, will be restarted" Apr 16 18:19:52.514660 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:52.514643 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" podUID="d3e94ba9-6d0d-433f-bf55-90ff5b6dd994" containerName="service-proxy" containerID="cri-o://f3090bcbc874f499407f8609a04f4ae4a6a00305b0315f6c81d950fde30d0649" gracePeriod=30 Apr 16 18:19:53.445082 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:53.445043 2570 generic.go:358] "Generic (PLEG): container finished" podID="d3e94ba9-6d0d-433f-bf55-90ff5b6dd994" containerID="f3090bcbc874f499407f8609a04f4ae4a6a00305b0315f6c81d950fde30d0649" exitCode=2 Apr 16 18:19:53.445256 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:53.445124 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" 
event={"ID":"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994","Type":"ContainerDied","Data":"f3090bcbc874f499407f8609a04f4ae4a6a00305b0315f6c81d950fde30d0649"} Apr 16 18:19:53.445256 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:19:53.445168 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-d96fff796-nlgnl" event={"ID":"d3e94ba9-6d0d-433f-bf55-90ff5b6dd994","Type":"ContainerStarted","Data":"de2ec86502bd02250c8dda5f3955460095dbc648e2dbbc419868f3e65a9797b9"} Apr 16 18:20:21.517901 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:21.517864 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs\") pod \"network-metrics-daemon-hgcdt\" (UID: \"5af0e6ec-389a-47dd-afc0-725b505e4635\") " pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:20:21.520092 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:21.520072 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af0e6ec-389a-47dd-afc0-725b505e4635-metrics-certs\") pod \"network-metrics-daemon-hgcdt\" (UID: \"5af0e6ec-389a-47dd-afc0-725b505e4635\") " pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:20:21.780956 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:21.780884 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-pwdr4\"" Apr 16 18:20:21.788872 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:21.788857 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hgcdt" Apr 16 18:20:21.896684 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:21.896653 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hgcdt"] Apr 16 18:20:21.899615 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:20:21.899589 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5af0e6ec_389a_47dd_afc0_725b505e4635.slice/crio-5eceb289ed4544c67a72218436cc56db5180ff15f4f418538640efd1dc468662 WatchSource:0}: Error finding container 5eceb289ed4544c67a72218436cc56db5180ff15f4f418538640efd1dc468662: Status 404 returned error can't find the container with id 5eceb289ed4544c67a72218436cc56db5180ff15f4f418538640efd1dc468662 Apr 16 18:20:22.515508 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:22.515463 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hgcdt" event={"ID":"5af0e6ec-389a-47dd-afc0-725b505e4635","Type":"ContainerStarted","Data":"5eceb289ed4544c67a72218436cc56db5180ff15f4f418538640efd1dc468662"} Apr 16 18:20:23.520095 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:23.520062 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hgcdt" event={"ID":"5af0e6ec-389a-47dd-afc0-725b505e4635","Type":"ContainerStarted","Data":"48f59b3d26282111daa5b51d0b280648d30fdd4592b13a5d18e0d7efe02670e4"} Apr 16 18:20:23.520095 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:23.520099 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hgcdt" event={"ID":"5af0e6ec-389a-47dd-afc0-725b505e4635","Type":"ContainerStarted","Data":"13f626b2a5d164b8cbf2dc290734dd03b3fbf184b49a7fd76c0965ac0d5875aa"} Apr 16 
18:20:23.546370 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:23.546323 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hgcdt" podStartSLOduration=253.638012391 podStartE2EDuration="4m14.546307339s" podCreationTimestamp="2026-04-16 18:16:09 +0000 UTC" firstStartedPulling="2026-04-16 18:20:21.901354829 +0000 UTC m=+252.707107608" lastFinishedPulling="2026-04-16 18:20:22.809649771 +0000 UTC m=+253.615402556" observedRunningTime="2026-04-16 18:20:23.544906562 +0000 UTC m=+254.350659367" watchObservedRunningTime="2026-04-16 18:20:23.546307339 +0000 UTC m=+254.352060199" Apr 16 18:20:48.275238 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:20:48.275192 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" podUID="90976564-8bbb-407b-a345-f362c0c02c2d" Apr 16 18:20:48.275664 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:20:48.275199 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-tblwn" podUID="63acc75e-52de-45b3-a91a-8c41889d9a55" Apr 16 18:20:48.275664 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:20:48.275199 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-dns/dns-default-gd5b4" podUID="be0027a7-e3ae-4c79-8020-883f6b6eda09" Apr 16 18:20:48.580189 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:48.580108 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:20:48.580382 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:48.580254 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gd5b4" Apr 16 18:20:48.580444 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:48.580385 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:20:52.232124 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:52.232074 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert\") pod \"ingress-canary-tblwn\" (UID: \"63acc75e-52de-45b3-a91a-8c41889d9a55\") " pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:20:52.232558 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:52.232142 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:20:52.232558 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:52.232175 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-297lb\" (UID: \"90976564-8bbb-407b-a345-f362c0c02c2d\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:20:52.234553 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:52.234518 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be0027a7-e3ae-4c79-8020-883f6b6eda09-metrics-tls\") pod \"dns-default-gd5b4\" (UID: \"be0027a7-e3ae-4c79-8020-883f6b6eda09\") " pod="openshift-dns/dns-default-gd5b4" Apr 16 18:20:52.234691 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:52.234614 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/90976564-8bbb-407b-a345-f362c0c02c2d-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-297lb\" (UID: \"90976564-8bbb-407b-a345-f362c0c02c2d\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:20:52.234691 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:52.234653 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63acc75e-52de-45b3-a91a-8c41889d9a55-cert\") pod \"ingress-canary-tblwn\" (UID: \"63acc75e-52de-45b3-a91a-8c41889d9a55\") " pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:20:52.483633 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:52.483549 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-4b92n\"" Apr 16 18:20:52.483633 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:52.483550 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"default-dockercfg-cdbck\"" Apr 16 18:20:52.483633 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:52.483549 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-gz4g7\"" Apr 16 18:20:52.491346 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:52.491327 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tblwn" Apr 16 18:20:52.491436 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:52.491347 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gd5b4" Apr 16 18:20:52.491436 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:52.491431 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" Apr 16 18:20:52.635396 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:52.635348 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tblwn"] Apr 16 18:20:52.639031 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:20:52.639002 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63acc75e_52de_45b3_a91a_8c41889d9a55.slice/crio-583af9736504c724f0cf44758366868418d29a4c8a17b5243b5ca6d7ea52cc21 WatchSource:0}: Error finding container 583af9736504c724f0cf44758366868418d29a4c8a17b5243b5ca6d7ea52cc21: Status 404 returned error can't find the container with id 583af9736504c724f0cf44758366868418d29a4c8a17b5243b5ca6d7ea52cc21 Apr 16 18:20:52.855431 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:52.855384 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gd5b4"] Apr 16 18:20:52.856288 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:52.856266 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb"] Apr 16 18:20:52.857761 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:20:52.857737 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe0027a7_e3ae_4c79_8020_883f6b6eda09.slice/crio-fb316b8ad85ecea435105ba3d827de7d84fd9a259d19e7ff20c0f55474d1e751 WatchSource:0}: Error finding container fb316b8ad85ecea435105ba3d827de7d84fd9a259d19e7ff20c0f55474d1e751: Status 404 returned error can't find the container with id fb316b8ad85ecea435105ba3d827de7d84fd9a259d19e7ff20c0f55474d1e751 Apr 16 18:20:52.858450 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:20:52.858430 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90976564_8bbb_407b_a345_f362c0c02c2d.slice/crio-072a98ccae2a5a2763c1ab6d8762c0dc7352b5b908f21cc8a1631a7b0fca77ce WatchSource:0}: Error finding container 072a98ccae2a5a2763c1ab6d8762c0dc7352b5b908f21cc8a1631a7b0fca77ce: Status 404 returned error can't find the container with id 072a98ccae2a5a2763c1ab6d8762c0dc7352b5b908f21cc8a1631a7b0fca77ce Apr 16 18:20:53.604299 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:53.604254 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tblwn" event={"ID":"63acc75e-52de-45b3-a91a-8c41889d9a55","Type":"ContainerStarted","Data":"583af9736504c724f0cf44758366868418d29a4c8a17b5243b5ca6d7ea52cc21"} Apr 16 18:20:53.605990 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:53.605949 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gd5b4" event={"ID":"be0027a7-e3ae-4c79-8020-883f6b6eda09","Type":"ContainerStarted","Data":"fb316b8ad85ecea435105ba3d827de7d84fd9a259d19e7ff20c0f55474d1e751"} Apr 16 18:20:53.607638 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:53.607604 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" 
event={"ID":"90976564-8bbb-407b-a345-f362c0c02c2d","Type":"ContainerStarted","Data":"072a98ccae2a5a2763c1ab6d8762c0dc7352b5b908f21cc8a1631a7b0fca77ce"} Apr 16 18:20:55.614922 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:55.614886 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tblwn" event={"ID":"63acc75e-52de-45b3-a91a-8c41889d9a55","Type":"ContainerStarted","Data":"12c04efc7d648477098780dd236eac668c45b4038a7dba2470fdb845c647528a"} Apr 16 18:20:55.616463 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:55.616432 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gd5b4" event={"ID":"be0027a7-e3ae-4c79-8020-883f6b6eda09","Type":"ContainerStarted","Data":"72021b3d79532c006bbb291092a5e087182024941e919a6ed2b26a19a1c699f0"} Apr 16 18:20:55.616587 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:55.616467 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gd5b4" event={"ID":"be0027a7-e3ae-4c79-8020-883f6b6eda09","Type":"ContainerStarted","Data":"d841c9586fdc8df195fc89c5e039c5b984aa7b592df50b0f9d14c71714c34396"} Apr 16 18:20:55.616587 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:55.616486 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-gd5b4" Apr 16 18:20:55.617679 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:55.617658 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" event={"ID":"90976564-8bbb-407b-a345-f362c0c02c2d","Type":"ContainerStarted","Data":"f191af57efdd96f56dac423d95be98af21b40045a2ea2b73a48a1d5921bea813"} Apr 16 18:20:55.633942 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:55.633895 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-tblwn" podStartSLOduration=251.497605352 podStartE2EDuration="4m13.633881764s" podCreationTimestamp="2026-04-16 18:16:42 +0000 UTC" firstStartedPulling="2026-04-16 18:20:52.641500511 +0000 UTC m=+283.447253301" lastFinishedPulling="2026-04-16 18:20:54.777776934 +0000 UTC m=+285.583529713" observedRunningTime="2026-04-16 18:20:55.632925104 +0000 UTC m=+286.438677904" watchObservedRunningTime="2026-04-16 18:20:55.633881764 +0000 UTC m=+286.439634556" Apr 16 18:20:55.650473 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:55.650421 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-gd5b4" podStartSLOduration=251.731865181 podStartE2EDuration="4m13.650407322s" podCreationTimestamp="2026-04-16 18:16:42 +0000 UTC" firstStartedPulling="2026-04-16 18:20:52.859875661 +0000 UTC m=+283.665628441" lastFinishedPulling="2026-04-16 18:20:54.778417789 +0000 UTC m=+285.584170582" observedRunningTime="2026-04-16 18:20:55.650306027 +0000 UTC m=+286.456058829" watchObservedRunningTime="2026-04-16 18:20:55.650407322 +0000 UTC m=+286.456160123" Apr 16 18:20:55.667620 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:20:55.667556 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-297lb" podStartSLOduration=268.753109813 podStartE2EDuration="4m30.667538015s" podCreationTimestamp="2026-04-16 18:16:25 +0000 UTC" firstStartedPulling="2026-04-16 18:20:52.8603299 +0000 UTC m=+283.666082679" lastFinishedPulling="2026-04-16 18:20:54.77475809 +0000 UTC m=+285.580510881" observedRunningTime="2026-04-16 
18:20:55.666430586 +0000 UTC m=+286.472183387" watchObservedRunningTime="2026-04-16 18:20:55.667538015 +0000 UTC m=+286.473290815" Apr 16 18:21:05.623129 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:21:05.623098 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gd5b4" Apr 16 18:21:09.666274 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:21:09.666245 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-acl-logging/0.log" Apr 16 18:21:09.666687 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:21:09.666515 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-acl-logging/0.log" Apr 16 18:21:09.668612 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:21:09.668590 2570 kubelet.go:1628] "Image garbage collection succeeded" Apr 16 18:25:17.965017 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:17.964977 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-7668d57578-rmx42"] Apr 16 18:25:17.967049 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:17.967029 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7668d57578-rmx42" Apr 16 18:25:17.970600 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:17.970579 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-webhook-server-cert\"" Apr 16 18:25:17.971336 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:17.971320 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\"" Apr 16 18:25:17.971402 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:17.971325 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\"" Apr 16 18:25:17.971402 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:17.971379 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-controller-manager-dockercfg-ss6qt\"" Apr 16 18:25:17.997501 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:17.997477 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-7668d57578-rmx42"] Apr 16 18:25:18.029000 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.028969 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/seaweedfs-86cc847c5c-mq58k"] Apr 16 18:25:18.030924 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.030910 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-mq58k" Apr 16 18:25:18.033547 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.033528 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"default-dockercfg-8fqk7\"" Apr 16 18:25:18.033674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.033573 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"mlpipeline-s3-artifact\"" Apr 16 18:25:18.037244 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.037188 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db7b28a3-cfdd-487d-afa4-9b509061bf47-cert\") pod \"kserve-controller-manager-7668d57578-rmx42\" (UID: \"db7b28a3-cfdd-487d-afa4-9b509061bf47\") " pod="kserve/kserve-controller-manager-7668d57578-rmx42" Apr 16 18:25:18.037359 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.037244 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfp9f\" (UniqueName: \"kubernetes.io/projected/db7b28a3-cfdd-487d-afa4-9b509061bf47-kube-api-access-rfp9f\") pod \"kserve-controller-manager-7668d57578-rmx42\" (UID: \"db7b28a3-cfdd-487d-afa4-9b509061bf47\") " pod="kserve/kserve-controller-manager-7668d57578-rmx42" Apr 16 18:25:18.040598 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.040576 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-mq58k"] Apr 16 18:25:18.138440 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.138400 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lblnt\" (UniqueName: \"kubernetes.io/projected/5dd6cdbb-17d6-49da-9a47-c56a45bfda30-kube-api-access-lblnt\") pod \"seaweedfs-86cc847c5c-mq58k\" (UID: \"5dd6cdbb-17d6-49da-9a47-c56a45bfda30\") " pod="kserve/seaweedfs-86cc847c5c-mq58k" Apr 16 18:25:18.138620 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.138459 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db7b28a3-cfdd-487d-afa4-9b509061bf47-cert\") pod \"kserve-controller-manager-7668d57578-rmx42\" (UID: \"db7b28a3-cfdd-487d-afa4-9b509061bf47\") " pod="kserve/kserve-controller-manager-7668d57578-rmx42" Apr 16 18:25:18.138620 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.138500 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rfp9f\" (UniqueName: \"kubernetes.io/projected/db7b28a3-cfdd-487d-afa4-9b509061bf47-kube-api-access-rfp9f\") pod \"kserve-controller-manager-7668d57578-rmx42\" (UID: \"db7b28a3-cfdd-487d-afa4-9b509061bf47\") " pod="kserve/kserve-controller-manager-7668d57578-rmx42" Apr 16 18:25:18.138620 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.138536 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5dd6cdbb-17d6-49da-9a47-c56a45bfda30-data\") pod \"seaweedfs-86cc847c5c-mq58k\" (UID: \"5dd6cdbb-17d6-49da-9a47-c56a45bfda30\") " pod="kserve/seaweedfs-86cc847c5c-mq58k" Apr 16 18:25:18.140915 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.140893 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db7b28a3-cfdd-487d-afa4-9b509061bf47-cert\") pod \"kserve-controller-manager-7668d57578-rmx42\" (UID: 
\"db7b28a3-cfdd-487d-afa4-9b509061bf47\") " pod="kserve/kserve-controller-manager-7668d57578-rmx42" Apr 16 18:25:18.146054 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.146034 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfp9f\" (UniqueName: \"kubernetes.io/projected/db7b28a3-cfdd-487d-afa4-9b509061bf47-kube-api-access-rfp9f\") pod \"kserve-controller-manager-7668d57578-rmx42\" (UID: \"db7b28a3-cfdd-487d-afa4-9b509061bf47\") " pod="kserve/kserve-controller-manager-7668d57578-rmx42" Apr 16 18:25:18.239212 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.239129 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lblnt\" (UniqueName: \"kubernetes.io/projected/5dd6cdbb-17d6-49da-9a47-c56a45bfda30-kube-api-access-lblnt\") pod \"seaweedfs-86cc847c5c-mq58k\" (UID: \"5dd6cdbb-17d6-49da-9a47-c56a45bfda30\") " pod="kserve/seaweedfs-86cc847c5c-mq58k" Apr 16 18:25:18.239212 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.239194 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5dd6cdbb-17d6-49da-9a47-c56a45bfda30-data\") pod \"seaweedfs-86cc847c5c-mq58k\" (UID: \"5dd6cdbb-17d6-49da-9a47-c56a45bfda30\") " pod="kserve/seaweedfs-86cc847c5c-mq58k" Apr 16 18:25:18.239583 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.239559 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5dd6cdbb-17d6-49da-9a47-c56a45bfda30-data\") pod \"seaweedfs-86cc847c5c-mq58k\" (UID: \"5dd6cdbb-17d6-49da-9a47-c56a45bfda30\") " pod="kserve/seaweedfs-86cc847c5c-mq58k" Apr 16 18:25:18.249875 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.249857 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lblnt\" (UniqueName: \"kubernetes.io/projected/5dd6cdbb-17d6-49da-9a47-c56a45bfda30-kube-api-access-lblnt\") pod \"seaweedfs-86cc847c5c-mq58k\" (UID: \"5dd6cdbb-17d6-49da-9a47-c56a45bfda30\") " pod="kserve/seaweedfs-86cc847c5c-mq58k" Apr 16 18:25:18.276295 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.276252 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7668d57578-rmx42" Apr 16 18:25:18.339861 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.339834 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-mq58k" Apr 16 18:25:18.393411 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.393365 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-7668d57578-rmx42"] Apr 16 18:25:18.397398 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:25:18.397346 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb7b28a3_cfdd_487d_afa4_9b509061bf47.slice/crio-68312c9b1f42b746f98dfec55e92f220ec8e0f5605e7ef3962c695e9c47b2c97 WatchSource:0}: Error finding container 68312c9b1f42b746f98dfec55e92f220ec8e0f5605e7ef3962c695e9c47b2c97: Status 404 returned error can't find the container with id 68312c9b1f42b746f98dfec55e92f220ec8e0f5605e7ef3962c695e9c47b2c97 Apr 16 18:25:18.398845 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.398826 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 18:25:18.463758 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:18.463730 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-mq58k"] Apr 16 18:25:18.467136 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:25:18.467104 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dd6cdbb_17d6_49da_9a47_c56a45bfda30.slice/crio-fc74bfdd535ef9fc7644c9a721b82beb898a46da2a8e5db50519ccafde0f39a6 WatchSource:0}: Error finding container fc74bfdd535ef9fc7644c9a721b82beb898a46da2a8e5db50519ccafde0f39a6: Status 404 returned error can't find the container with id fc74bfdd535ef9fc7644c9a721b82beb898a46da2a8e5db50519ccafde0f39a6 Apr 16 18:25:19.280661 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:19.280606 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-mq58k" event={"ID":"5dd6cdbb-17d6-49da-9a47-c56a45bfda30","Type":"ContainerStarted","Data":"fc74bfdd535ef9fc7644c9a721b82beb898a46da2a8e5db50519ccafde0f39a6"} Apr 16 18:25:19.282325 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:19.282283 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7668d57578-rmx42" event={"ID":"db7b28a3-cfdd-487d-afa4-9b509061bf47","Type":"ContainerStarted","Data":"68312c9b1f42b746f98dfec55e92f220ec8e0f5605e7ef3962c695e9c47b2c97"} Apr 16 18:25:22.292795 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:22.292757 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-mq58k" event={"ID":"5dd6cdbb-17d6-49da-9a47-c56a45bfda30","Type":"ContainerStarted","Data":"ee980c9b5b7f8f419ec4dbee16c6bb1cb46f15602a6abc2d0fb39be7d9d65003"} Apr 16 18:25:22.293247 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:22.292953 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/seaweedfs-86cc847c5c-mq58k" Apr 16 18:25:22.294047 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:22.294025 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7668d57578-rmx42" event={"ID":"db7b28a3-cfdd-487d-afa4-9b509061bf47","Type":"ContainerStarted","Data":"325d86be4734c32eb35f222aa6acb78fa940a0b4ce31b799aca7ddc8c541d285"} Apr 16 18:25:22.294185 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:22.294173 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-7668d57578-rmx42" Apr 16 18:25:22.309580 ip-10-0-141-219 
Apr 16 18:25:22.322381 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:22.322326 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-7668d57578-rmx42" podStartSLOduration=1.849500254 podStartE2EDuration="5.32231342s" podCreationTimestamp="2026-04-16 18:25:17 +0000 UTC" firstStartedPulling="2026-04-16 18:25:18.398955278 +0000 UTC m=+549.204708058" lastFinishedPulling="2026-04-16 18:25:21.871768444 +0000 UTC m=+552.677521224" observedRunningTime="2026-04-16 18:25:22.321600871 +0000 UTC m=+553.127353672" watchObservedRunningTime="2026-04-16 18:25:22.32231342 +0000 UTC m=+553.128066221"
Apr 16 18:25:28.299111 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:28.299080 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/seaweedfs-86cc847c5c-mq58k"
Apr 16 18:25:53.302268 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:53.302207 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-7668d57578-rmx42"
Apr 16 18:25:53.960267 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:53.960210 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-7668d57578-rmx42"]
Apr 16 18:25:53.960504 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:53.960454 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve/kserve-controller-manager-7668d57578-rmx42" podUID="db7b28a3-cfdd-487d-afa4-9b509061bf47" containerName="manager" containerID="cri-o://325d86be4734c32eb35f222aa6acb78fa940a0b4ce31b799aca7ddc8c541d285" gracePeriod=10
Apr 16 18:25:53.989220 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:53.989195 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-7668d57578-zj268"]
Apr 16 18:25:53.991124 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:53.991108 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7668d57578-zj268"
Apr 16 18:25:54.004559 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.004537 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-7668d57578-zj268"]
Apr 16 18:25:54.101327 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.101292 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67ac5895-1883-4098-95c0-f84adec6a489-cert\") pod \"kserve-controller-manager-7668d57578-zj268\" (UID: \"67ac5895-1883-4098-95c0-f84adec6a489\") " pod="kserve/kserve-controller-manager-7668d57578-zj268"
Apr 16 18:25:54.101482 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.101376 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckph4\" (UniqueName: \"kubernetes.io/projected/67ac5895-1883-4098-95c0-f84adec6a489-kube-api-access-ckph4\") pod \"kserve-controller-manager-7668d57578-zj268\" (UID: \"67ac5895-1883-4098-95c0-f84adec6a489\") " pod="kserve/kserve-controller-manager-7668d57578-zj268"
Apr 16 18:25:54.199278 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.199257 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7668d57578-rmx42"
Apr 16 18:25:54.202510 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.202488 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ckph4\" (UniqueName: \"kubernetes.io/projected/67ac5895-1883-4098-95c0-f84adec6a489-kube-api-access-ckph4\") pod \"kserve-controller-manager-7668d57578-zj268\" (UID: \"67ac5895-1883-4098-95c0-f84adec6a489\") " pod="kserve/kserve-controller-manager-7668d57578-zj268"
Apr 16 18:25:54.202585 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.202529 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67ac5895-1883-4098-95c0-f84adec6a489-cert\") pod \"kserve-controller-manager-7668d57578-zj268\" (UID: \"67ac5895-1883-4098-95c0-f84adec6a489\") " pod="kserve/kserve-controller-manager-7668d57578-zj268"
Apr 16 18:25:54.204760 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.204736 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67ac5895-1883-4098-95c0-f84adec6a489-cert\") pod \"kserve-controller-manager-7668d57578-zj268\" (UID: \"67ac5895-1883-4098-95c0-f84adec6a489\") " pod="kserve/kserve-controller-manager-7668d57578-zj268"
Apr 16 18:25:54.217460 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.217400 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckph4\" (UniqueName: \"kubernetes.io/projected/67ac5895-1883-4098-95c0-f84adec6a489-kube-api-access-ckph4\") pod \"kserve-controller-manager-7668d57578-zj268\" (UID: \"67ac5895-1883-4098-95c0-f84adec6a489\") " pod="kserve/kserve-controller-manager-7668d57578-zj268"
Apr 16 18:25:54.303672 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.303640 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfp9f\" (UniqueName: \"kubernetes.io/projected/db7b28a3-cfdd-487d-afa4-9b509061bf47-kube-api-access-rfp9f\") pod \"db7b28a3-cfdd-487d-afa4-9b509061bf47\" (UID: \"db7b28a3-cfdd-487d-afa4-9b509061bf47\") "
Apr 16 18:25:54.304031 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.303740 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db7b28a3-cfdd-487d-afa4-9b509061bf47-cert\") pod \"db7b28a3-cfdd-487d-afa4-9b509061bf47\" (UID: \"db7b28a3-cfdd-487d-afa4-9b509061bf47\") "
Apr 16 18:25:54.305775 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.305754 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7b28a3-cfdd-487d-afa4-9b509061bf47-cert" (OuterVolumeSpecName: "cert") pod "db7b28a3-cfdd-487d-afa4-9b509061bf47" (UID: "db7b28a3-cfdd-487d-afa4-9b509061bf47"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 16 18:25:54.305835 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.305804 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7b28a3-cfdd-487d-afa4-9b509061bf47-kube-api-access-rfp9f" (OuterVolumeSpecName: "kube-api-access-rfp9f") pod "db7b28a3-cfdd-487d-afa4-9b509061bf47" (UID: "db7b28a3-cfdd-487d-afa4-9b509061bf47"). InnerVolumeSpecName "kube-api-access-rfp9f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 18:25:54.335986 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.335956 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7668d57578-zj268"
Apr 16 18:25:54.381014 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.380946 2570 generic.go:358] "Generic (PLEG): container finished" podID="db7b28a3-cfdd-487d-afa4-9b509061bf47" containerID="325d86be4734c32eb35f222aa6acb78fa940a0b4ce31b799aca7ddc8c541d285" exitCode=0
Apr 16 18:25:54.381170 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.381029 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7668d57578-rmx42" event={"ID":"db7b28a3-cfdd-487d-afa4-9b509061bf47","Type":"ContainerDied","Data":"325d86be4734c32eb35f222aa6acb78fa940a0b4ce31b799aca7ddc8c541d285"}
Apr 16 18:25:54.381170 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.381064 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7668d57578-rmx42" event={"ID":"db7b28a3-cfdd-487d-afa4-9b509061bf47","Type":"ContainerDied","Data":"68312c9b1f42b746f98dfec55e92f220ec8e0f5605e7ef3962c695e9c47b2c97"}
Apr 16 18:25:54.381170 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.381083 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7668d57578-rmx42"
Apr 16 18:25:54.381382 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.381095 2570 scope.go:117] "RemoveContainer" containerID="325d86be4734c32eb35f222aa6acb78fa940a0b4ce31b799aca7ddc8c541d285"
Apr 16 18:25:54.395602 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.395577 2570 scope.go:117] "RemoveContainer" containerID="325d86be4734c32eb35f222aa6acb78fa940a0b4ce31b799aca7ddc8c541d285"
Apr 16 18:25:54.395925 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:25:54.395887 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"325d86be4734c32eb35f222aa6acb78fa940a0b4ce31b799aca7ddc8c541d285\": container with ID starting with 325d86be4734c32eb35f222aa6acb78fa940a0b4ce31b799aca7ddc8c541d285 not found: ID does not exist" containerID="325d86be4734c32eb35f222aa6acb78fa940a0b4ce31b799aca7ddc8c541d285"
Apr 16 18:25:54.396000 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.395919 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325d86be4734c32eb35f222aa6acb78fa940a0b4ce31b799aca7ddc8c541d285"} err="failed to get container status \"325d86be4734c32eb35f222aa6acb78fa940a0b4ce31b799aca7ddc8c541d285\": rpc error: code = NotFound desc = could not find container \"325d86be4734c32eb35f222aa6acb78fa940a0b4ce31b799aca7ddc8c541d285\": container with ID starting with 325d86be4734c32eb35f222aa6acb78fa940a0b4ce31b799aca7ddc8c541d285 not found: ID does not exist"
Apr 16 18:25:54.404288 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.404266 2570 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db7b28a3-cfdd-487d-afa4-9b509061bf47-cert\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\""
Apr 16 18:25:54.404288 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.404290 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rfp9f\" (UniqueName: \"kubernetes.io/projected/db7b28a3-cfdd-487d-afa4-9b509061bf47-kube-api-access-rfp9f\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\""
Apr 16 18:25:54.406476 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.406450 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-7668d57578-rmx42"]
Apr 16 18:25:54.409831 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.409811 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve/kserve-controller-manager-7668d57578-rmx42"]
Apr 16 18:25:54.453563 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:54.453533 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-7668d57578-zj268"]
Apr 16 18:25:54.456592 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:25:54.456565 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67ac5895_1883_4098_95c0_f84adec6a489.slice/crio-0cf88b8bf0ca93ab26b0498de7a19f68d4e59841ba983bbf41d750013e901f8a WatchSource:0}: Error finding container 0cf88b8bf0ca93ab26b0498de7a19f68d4e59841ba983bbf41d750013e901f8a: Status 404 returned error can't find the container with id 0cf88b8bf0ca93ab26b0498de7a19f68d4e59841ba983bbf41d750013e901f8a
Apr 16 18:25:55.385768 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:55.385735 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7668d57578-zj268" event={"ID":"67ac5895-1883-4098-95c0-f84adec6a489","Type":"ContainerStarted","Data":"ff179a27a19986ed634c2c753874d51bb7feb4147a4366b89a4677fe6293348a"}
event={"ID":"67ac5895-1883-4098-95c0-f84adec6a489","Type":"ContainerStarted","Data":"ff179a27a19986ed634c2c753874d51bb7feb4147a4366b89a4677fe6293348a"} Apr 16 18:25:55.385768 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:55.385772 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7668d57578-zj268" event={"ID":"67ac5895-1883-4098-95c0-f84adec6a489","Type":"ContainerStarted","Data":"0cf88b8bf0ca93ab26b0498de7a19f68d4e59841ba983bbf41d750013e901f8a"} Apr 16 18:25:55.386192 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:55.385872 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-7668d57578-zj268" Apr 16 18:25:55.781066 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:25:55.780987 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db7b28a3-cfdd-487d-afa4-9b509061bf47" path="/var/lib/kubelet/pods/db7b28a3-cfdd-487d-afa4-9b509061bf47/volumes" Apr 16 18:26:09.684512 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:26:09.684487 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-acl-logging/0.log" Apr 16 18:26:09.684937 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:26:09.684658 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-acl-logging/0.log" Apr 16 18:26:26.394338 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:26:26.394306 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-7668d57578-zj268" Apr 16 18:26:26.415345 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:26:26.415292 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-7668d57578-zj268" podStartSLOduration=32.883537733 podStartE2EDuration="33.415276926s" podCreationTimestamp="2026-04-16 18:25:53 +0000 UTC" firstStartedPulling="2026-04-16 18:25:54.457842475 +0000 UTC m=+585.263595257" lastFinishedPulling="2026-04-16 18:25:54.989581668 +0000 UTC m=+585.795334450" observedRunningTime="2026-04-16 18:25:55.402833272 +0000 UTC m=+586.208586073" watchObservedRunningTime="2026-04-16 18:26:26.415276926 +0000 UTC m=+617.221029727" Apr 16 18:27:02.108429 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:02.106032 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x"] Apr 16 18:27:02.108429 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:02.106621 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="db7b28a3-cfdd-487d-afa4-9b509061bf47" containerName="manager" Apr 16 18:27:02.108429 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:02.106640 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7b28a3-cfdd-487d-afa4-9b509061bf47" containerName="manager" Apr 16 18:27:02.108429 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:02.106768 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="db7b28a3-cfdd-487d-afa4-9b509061bf47" containerName="manager" Apr 16 18:27:02.110398 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:02.110377 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" Apr 16 18:27:02.112768 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:02.112747 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-842jm\"" Apr 16 18:27:02.118305 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:02.118283 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x"] Apr 16 18:27:02.148698 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:02.148671 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a98754ec-c8c7-4938-8f2c-6800bd994162-kserve-provision-location\") pod \"isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x\" (UID: \"a98754ec-c8c7-4938-8f2c-6800bd994162\") " pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" Apr 16 18:27:02.249642 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:02.249611 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a98754ec-c8c7-4938-8f2c-6800bd994162-kserve-provision-location\") pod \"isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x\" (UID: \"a98754ec-c8c7-4938-8f2c-6800bd994162\") " pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" Apr 16 18:27:02.249976 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:02.249957 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a98754ec-c8c7-4938-8f2c-6800bd994162-kserve-provision-location\") pod \"isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x\" (UID: \"a98754ec-c8c7-4938-8f2c-6800bd994162\") " pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" Apr 16 18:27:02.420742 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:02.420710 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" Apr 16 18:27:02.541566 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:02.541532 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x"] Apr 16 18:27:02.544702 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:27:02.544676 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda98754ec_c8c7_4938_8f2c_6800bd994162.slice/crio-c693309668bd094bc5b9deff9e9497ff58489c844743d6fee119f93d1c82705a WatchSource:0}: Error finding container c693309668bd094bc5b9deff9e9497ff58489c844743d6fee119f93d1c82705a: Status 404 returned error can't find the container with id c693309668bd094bc5b9deff9e9497ff58489c844743d6fee119f93d1c82705a Apr 16 18:27:02.570525 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:02.570493 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" event={"ID":"a98754ec-c8c7-4938-8f2c-6800bd994162","Type":"ContainerStarted","Data":"c693309668bd094bc5b9deff9e9497ff58489c844743d6fee119f93d1c82705a"} Apr 16 18:27:06.583823 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:06.583783 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" event={"ID":"a98754ec-c8c7-4938-8f2c-6800bd994162","Type":"ContainerStarted","Data":"2a89545cb3b782b6fa2dcd72d5505d7470255981b786faf0d32585bdbc0ad376"} Apr 16 18:27:09.593141 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:09.593052 2570 generic.go:358] "Generic (PLEG): container finished" podID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerID="2a89545cb3b782b6fa2dcd72d5505d7470255981b786faf0d32585bdbc0ad376" exitCode=0 Apr 16 18:27:09.593141 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:09.593124 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" event={"ID":"a98754ec-c8c7-4938-8f2c-6800bd994162","Type":"ContainerDied","Data":"2a89545cb3b782b6fa2dcd72d5505d7470255981b786faf0d32585bdbc0ad376"} Apr 16 18:27:22.635920 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:22.635884 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" event={"ID":"a98754ec-c8c7-4938-8f2c-6800bd994162","Type":"ContainerStarted","Data":"eeebd4784d61901ffb49cc4f11e44e7e6320972434864996a72213dde2b5a3d7"} Apr 16 18:27:25.647618 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:25.647584 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" event={"ID":"a98754ec-c8c7-4938-8f2c-6800bd994162","Type":"ContainerStarted","Data":"0a435616a648d0597e47f493ac8d0239bf3df6f32f1002444b425b62eaa7b463"} Apr 16 18:27:25.648006 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:25.647835 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" Apr 16 18:27:25.648006 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:25.647865 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" Apr 16 18:27:25.649221 ip-10-0-141-219 
kubenswrapper[2570]: I0416 18:27:25.649192 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:27:25.649845 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:25.649824 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:27:25.667141 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:25.667098 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podStartSLOduration=1.224611743 podStartE2EDuration="23.667086135s" podCreationTimestamp="2026-04-16 18:27:02 +0000 UTC" firstStartedPulling="2026-04-16 18:27:02.546663537 +0000 UTC m=+653.352416318" lastFinishedPulling="2026-04-16 18:27:24.989137923 +0000 UTC m=+675.794890710" observedRunningTime="2026-04-16 18:27:25.666049302 +0000 UTC m=+676.471802130" watchObservedRunningTime="2026-04-16 18:27:25.667086135 +0000 UTC m=+676.472838936" Apr 16 18:27:26.650807 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:26.650762 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:27:26.651206 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:26.651138 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:27:36.651020 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:36.650969 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:27:36.651485 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:36.651434 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:27:46.651747 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:46.651689 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:27:46.652251 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:46.652125 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" 
containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:27:56.650859 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:56.650809 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:27:56.651342 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:27:56.651215 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:28:06.651119 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:06.651019 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:28:06.651603 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:06.651556 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:28:16.650891 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:16.650838 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:28:16.651392 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:16.651299 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:28:26.651362 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:26.651313 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:28:26.651815 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:26.651693 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:28:36.651444 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:36.651409 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" Apr 16 18:28:36.651842 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:36.651582 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" Apr 16 18:28:47.321328 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.321290 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x"] Apr 16 18:28:47.321702 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.321660 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="kserve-container" containerID="cri-o://eeebd4784d61901ffb49cc4f11e44e7e6320972434864996a72213dde2b5a3d7" gracePeriod=30 Apr 16 18:28:47.321782 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.321736 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="agent" containerID="cri-o://0a435616a648d0597e47f493ac8d0239bf3df6f32f1002444b425b62eaa7b463" gracePeriod=30 Apr 16 18:28:47.400016 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.399981 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n"] Apr 16 18:28:47.403398 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.403373 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" Apr 16 18:28:47.411324 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.411298 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n"] Apr 16 18:28:47.445831 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.445805 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw"] Apr 16 18:28:47.448909 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.448887 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" Apr 16 18:28:47.455775 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.455753 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw"] Apr 16 18:28:47.504370 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.504343 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/db513adf-fc55-4a7d-994d-65a2d17805b8-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw\" (UID: \"db513adf-fc55-4a7d-994d-65a2d17805b8\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" Apr 16 18:28:47.504524 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.504394 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/87e2bd64-8f94-4af3-9811-3aaa021efaf9-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n\" (UID: \"87e2bd64-8f94-4af3-9811-3aaa021efaf9\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" Apr 16 18:28:47.607730 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.607485 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/87e2bd64-8f94-4af3-9811-3aaa021efaf9-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n\" (UID: \"87e2bd64-8f94-4af3-9811-3aaa021efaf9\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" Apr 16 18:28:47.607888 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.607751 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/db513adf-fc55-4a7d-994d-65a2d17805b8-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw\" (UID: \"db513adf-fc55-4a7d-994d-65a2d17805b8\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" Apr 16 18:28:47.607973 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.607952 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/87e2bd64-8f94-4af3-9811-3aaa021efaf9-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n\" (UID: \"87e2bd64-8f94-4af3-9811-3aaa021efaf9\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" Apr 16 18:28:47.608164 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.608137 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/db513adf-fc55-4a7d-994d-65a2d17805b8-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw\" (UID: \"db513adf-fc55-4a7d-994d-65a2d17805b8\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" Apr 16 18:28:47.713598 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.713554 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" Apr 16 18:28:47.758533 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.758502 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" Apr 16 18:28:47.846443 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.846405 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n"] Apr 16 18:28:47.849625 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:28:47.849593 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87e2bd64_8f94_4af3_9811_3aaa021efaf9.slice/crio-ff999b19848dc73183c7695f6c9ed75c11c8f3ef75f783499563e22a7b5fc40e WatchSource:0}: Error finding container ff999b19848dc73183c7695f6c9ed75c11c8f3ef75f783499563e22a7b5fc40e: Status 404 returned error can't find the container with id ff999b19848dc73183c7695f6c9ed75c11c8f3ef75f783499563e22a7b5fc40e Apr 16 18:28:47.860941 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.860902 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" event={"ID":"87e2bd64-8f94-4af3-9811-3aaa021efaf9","Type":"ContainerStarted","Data":"ff999b19848dc73183c7695f6c9ed75c11c8f3ef75f783499563e22a7b5fc40e"} Apr 16 18:28:47.893109 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:47.893085 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw"] Apr 16 18:28:47.895769 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:28:47.895745 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb513adf_fc55_4a7d_994d_65a2d17805b8.slice/crio-60880e6a16f1e5f0e218830341caaf2e66c3f26e6e72fd1ed584297a471c57df WatchSource:0}: Error finding container 60880e6a16f1e5f0e218830341caaf2e66c3f26e6e72fd1ed584297a471c57df: Status 404 returned error can't find the container with id 60880e6a16f1e5f0e218830341caaf2e66c3f26e6e72fd1ed584297a471c57df Apr 16 18:28:48.865685 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:48.865650 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" event={"ID":"db513adf-fc55-4a7d-994d-65a2d17805b8","Type":"ContainerStarted","Data":"5983729725ef376d413c95826e750cea9347c4ceae1d57fa2565fba6a9ddf39f"} Apr 16 18:28:48.866142 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:48.865694 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" event={"ID":"db513adf-fc55-4a7d-994d-65a2d17805b8","Type":"ContainerStarted","Data":"60880e6a16f1e5f0e218830341caaf2e66c3f26e6e72fd1ed584297a471c57df"} Apr 16 18:28:48.866998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:48.866976 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" event={"ID":"87e2bd64-8f94-4af3-9811-3aaa021efaf9","Type":"ContainerStarted","Data":"bf929ef1ed056b7f9aff9e16a6f1feab793cdd21e7e510530b7102a9a8aac2a5"} Apr 16 18:28:51.876885 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:51.876853 2570 generic.go:358] "Generic (PLEG): container finished" 
podID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerID="eeebd4784d61901ffb49cc4f11e44e7e6320972434864996a72213dde2b5a3d7" exitCode=0 Apr 16 18:28:51.877374 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:51.876923 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" event={"ID":"a98754ec-c8c7-4938-8f2c-6800bd994162","Type":"ContainerDied","Data":"eeebd4784d61901ffb49cc4f11e44e7e6320972434864996a72213dde2b5a3d7"} Apr 16 18:28:51.878155 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:51.878136 2570 generic.go:358] "Generic (PLEG): container finished" podID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerID="bf929ef1ed056b7f9aff9e16a6f1feab793cdd21e7e510530b7102a9a8aac2a5" exitCode=0 Apr 16 18:28:51.878290 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:51.878209 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" event={"ID":"87e2bd64-8f94-4af3-9811-3aaa021efaf9","Type":"ContainerDied","Data":"bf929ef1ed056b7f9aff9e16a6f1feab793cdd21e7e510530b7102a9a8aac2a5"} Apr 16 18:28:51.879443 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:51.879421 2570 generic.go:358] "Generic (PLEG): container finished" podID="db513adf-fc55-4a7d-994d-65a2d17805b8" containerID="5983729725ef376d413c95826e750cea9347c4ceae1d57fa2565fba6a9ddf39f" exitCode=0 Apr 16 18:28:51.879529 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:51.879471 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" event={"ID":"db513adf-fc55-4a7d-994d-65a2d17805b8","Type":"ContainerDied","Data":"5983729725ef376d413c95826e750cea9347c4ceae1d57fa2565fba6a9ddf39f"} Apr 16 18:28:52.885115 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:52.885070 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" event={"ID":"87e2bd64-8f94-4af3-9811-3aaa021efaf9","Type":"ContainerStarted","Data":"6c99a334ba02a913bf3db2894a3d0f451c669b686b0e85f1be4c32036b7aa8db"} Apr 16 18:28:52.885600 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:52.885538 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" Apr 16 18:28:52.886908 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:52.886875 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" podUID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 16 18:28:52.902626 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:52.902561 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" podStartSLOduration=5.902548665 podStartE2EDuration="5.902548665s" podCreationTimestamp="2026-04-16 18:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:28:52.901430411 +0000 UTC m=+763.707183210" watchObservedRunningTime="2026-04-16 18:28:52.902548665 +0000 UTC m=+763.708301465" Apr 16 18:28:53.889524 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:53.889488 2570 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" podUID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 16 18:28:56.650729 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:56.650669 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:28:56.651176 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:28:56.651017 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:29:03.889911 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:03.889862 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" podUID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 16 18:29:06.650975 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:06.650928 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:29:06.651453 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:06.651324 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:29:08.937508 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:08.937468 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" event={"ID":"db513adf-fc55-4a7d-994d-65a2d17805b8","Type":"ContainerStarted","Data":"f6c0392a8de49441f9df2e3a01253ce4ae6895884097dcf5895ed8b49fb738b6"} Apr 16 18:29:08.937921 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:08.937748 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" Apr 16 18:29:08.938951 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:08.938922 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" podUID="db513adf-fc55-4a7d-994d-65a2d17805b8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 16 18:29:08.954422 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:08.954385 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" podStartSLOduration=5.691263609 podStartE2EDuration="21.954374356s" podCreationTimestamp="2026-04-16 18:28:47 +0000 UTC" firstStartedPulling="2026-04-16 18:28:51.880569172 +0000 UTC 
m=+762.686321950" lastFinishedPulling="2026-04-16 18:29:08.143679915 +0000 UTC m=+778.949432697" observedRunningTime="2026-04-16 18:29:08.952847631 +0000 UTC m=+779.758600426" watchObservedRunningTime="2026-04-16 18:29:08.954374356 +0000 UTC m=+779.760127157" Apr 16 18:29:09.940838 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:09.940800 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" podUID="db513adf-fc55-4a7d-994d-65a2d17805b8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 16 18:29:13.889912 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:13.889867 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" podUID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 16 18:29:16.651722 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:16.651672 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:29:16.652205 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:16.651809 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" Apr 16 18:29:16.652205 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:16.652002 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:29:16.652205 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:16.652095 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" Apr 16 18:29:17.507121 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.507098 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" Apr 16 18:29:17.648794 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.648755 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a98754ec-c8c7-4938-8f2c-6800bd994162-kserve-provision-location\") pod \"a98754ec-c8c7-4938-8f2c-6800bd994162\" (UID: \"a98754ec-c8c7-4938-8f2c-6800bd994162\") " Apr 16 18:29:17.649094 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.649068 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a98754ec-c8c7-4938-8f2c-6800bd994162-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "a98754ec-c8c7-4938-8f2c-6800bd994162" (UID: "a98754ec-c8c7-4938-8f2c-6800bd994162"). InnerVolumeSpecName "kserve-provision-location". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:29:17.749904 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.749867 2570 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a98754ec-c8c7-4938-8f2c-6800bd994162-kserve-provision-location\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:29:17.969780 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.969685 2570 generic.go:358] "Generic (PLEG): container finished" podID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerID="0a435616a648d0597e47f493ac8d0239bf3df6f32f1002444b425b62eaa7b463" exitCode=0 Apr 16 18:29:17.969780 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.969764 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" Apr 16 18:29:17.969780 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.969766 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" event={"ID":"a98754ec-c8c7-4938-8f2c-6800bd994162","Type":"ContainerDied","Data":"0a435616a648d0597e47f493ac8d0239bf3df6f32f1002444b425b62eaa7b463"} Apr 16 18:29:17.970037 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.969813 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x" event={"ID":"a98754ec-c8c7-4938-8f2c-6800bd994162","Type":"ContainerDied","Data":"c693309668bd094bc5b9deff9e9497ff58489c844743d6fee119f93d1c82705a"} Apr 16 18:29:17.970037 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.969830 2570 scope.go:117] "RemoveContainer" containerID="0a435616a648d0597e47f493ac8d0239bf3df6f32f1002444b425b62eaa7b463" Apr 16 18:29:17.977626 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.977608 2570 scope.go:117] "RemoveContainer" containerID="eeebd4784d61901ffb49cc4f11e44e7e6320972434864996a72213dde2b5a3d7" Apr 16 18:29:17.984903 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.984885 2570 scope.go:117] "RemoveContainer" containerID="2a89545cb3b782b6fa2dcd72d5505d7470255981b786faf0d32585bdbc0ad376" Apr 16 18:29:17.986775 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.986753 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x"] Apr 16 18:29:17.991116 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.991091 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-65c21-predictor-574ddc679b-jtj2x"] Apr 16 18:29:17.993018 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.993004 2570 scope.go:117] "RemoveContainer" containerID="0a435616a648d0597e47f493ac8d0239bf3df6f32f1002444b425b62eaa7b463" Apr 16 18:29:17.993523 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:29:17.993499 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a435616a648d0597e47f493ac8d0239bf3df6f32f1002444b425b62eaa7b463\": container with ID starting with 0a435616a648d0597e47f493ac8d0239bf3df6f32f1002444b425b62eaa7b463 not found: ID does not exist" containerID="0a435616a648d0597e47f493ac8d0239bf3df6f32f1002444b425b62eaa7b463" Apr 16 18:29:17.993615 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.993535 2570 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0a435616a648d0597e47f493ac8d0239bf3df6f32f1002444b425b62eaa7b463"} err="failed to get container status \"0a435616a648d0597e47f493ac8d0239bf3df6f32f1002444b425b62eaa7b463\": rpc error: code = NotFound desc = could not find container \"0a435616a648d0597e47f493ac8d0239bf3df6f32f1002444b425b62eaa7b463\": container with ID starting with 0a435616a648d0597e47f493ac8d0239bf3df6f32f1002444b425b62eaa7b463 not found: ID does not exist" Apr 16 18:29:17.993615 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.993566 2570 scope.go:117] "RemoveContainer" containerID="eeebd4784d61901ffb49cc4f11e44e7e6320972434864996a72213dde2b5a3d7" Apr 16 18:29:17.993829 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:29:17.993811 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeebd4784d61901ffb49cc4f11e44e7e6320972434864996a72213dde2b5a3d7\": container with ID starting with eeebd4784d61901ffb49cc4f11e44e7e6320972434864996a72213dde2b5a3d7 not found: ID does not exist" containerID="eeebd4784d61901ffb49cc4f11e44e7e6320972434864996a72213dde2b5a3d7" Apr 16 18:29:17.993868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.993838 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeebd4784d61901ffb49cc4f11e44e7e6320972434864996a72213dde2b5a3d7"} err="failed to get container status \"eeebd4784d61901ffb49cc4f11e44e7e6320972434864996a72213dde2b5a3d7\": rpc error: code = NotFound desc = could not find container \"eeebd4784d61901ffb49cc4f11e44e7e6320972434864996a72213dde2b5a3d7\": container with ID starting with eeebd4784d61901ffb49cc4f11e44e7e6320972434864996a72213dde2b5a3d7 not found: ID does not exist" Apr 16 18:29:17.993868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.993854 2570 scope.go:117] "RemoveContainer" containerID="2a89545cb3b782b6fa2dcd72d5505d7470255981b786faf0d32585bdbc0ad376" Apr 16 18:29:17.994049 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:29:17.994033 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a89545cb3b782b6fa2dcd72d5505d7470255981b786faf0d32585bdbc0ad376\": container with ID starting with 2a89545cb3b782b6fa2dcd72d5505d7470255981b786faf0d32585bdbc0ad376 not found: ID does not exist" containerID="2a89545cb3b782b6fa2dcd72d5505d7470255981b786faf0d32585bdbc0ad376" Apr 16 18:29:17.994112 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:17.994049 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a89545cb3b782b6fa2dcd72d5505d7470255981b786faf0d32585bdbc0ad376"} err="failed to get container status \"2a89545cb3b782b6fa2dcd72d5505d7470255981b786faf0d32585bdbc0ad376\": rpc error: code = NotFound desc = could not find container \"2a89545cb3b782b6fa2dcd72d5505d7470255981b786faf0d32585bdbc0ad376\": container with ID starting with 2a89545cb3b782b6fa2dcd72d5505d7470255981b786faf0d32585bdbc0ad376 not found: ID does not exist" Apr 16 18:29:19.781495 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:19.781457 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" path="/var/lib/kubelet/pods/a98754ec-c8c7-4938-8f2c-6800bd994162/volumes" Apr 16 18:29:19.941463 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:19.941415 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" 
podUID="db513adf-fc55-4a7d-994d-65a2d17805b8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 16 18:29:23.890376 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:23.890331 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" podUID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 16 18:29:29.941581 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:29.941490 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" podUID="db513adf-fc55-4a7d-994d-65a2d17805b8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 16 18:29:33.889568 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:33.889521 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" podUID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 16 18:29:39.941156 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:39.941115 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" podUID="db513adf-fc55-4a7d-994d-65a2d17805b8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 16 18:29:43.889917 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:43.889868 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" podUID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 16 18:29:49.941480 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:49.941432 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" podUID="db513adf-fc55-4a7d-994d-65a2d17805b8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 16 18:29:53.890146 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:53.890104 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" podUID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 16 18:29:55.777645 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:55.777594 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" podUID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 16 18:29:59.940990 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:29:59.940944 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" podUID="db513adf-fc55-4a7d-994d-65a2d17805b8" containerName="kserve-container" 
probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 16 18:30:05.781532 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:05.781497 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" Apr 16 18:30:09.942446 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:09.942411 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" Apr 16 18:30:17.443005 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.442967 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf"] Apr 16 18:30:17.443380 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.443254 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="kserve-container" Apr 16 18:30:17.443380 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.443266 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="kserve-container" Apr 16 18:30:17.443380 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.443281 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="agent" Apr 16 18:30:17.443380 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.443287 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="agent" Apr 16 18:30:17.443380 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.443294 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="storage-initializer" Apr 16 18:30:17.443380 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.443300 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="storage-initializer" Apr 16 18:30:17.443380 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.443349 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="kserve-container" Apr 16 18:30:17.443380 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.443358 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="a98754ec-c8c7-4938-8f2c-6800bd994162" containerName="agent" Apr 16 18:30:17.446421 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.446404 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" Apr 16 18:30:17.448944 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.448901 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-raw-e9c91-kube-rbac-proxy-sar-config\"" Apr 16 18:30:17.448944 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.448931 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 16 18:30:17.449258 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.449221 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-raw-e9c91-serving-cert\"" Apr 16 18:30:17.459388 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.459366 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf"] Apr 16 18:30:17.490416 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.490392 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd5911f6-35db-4fbb-b0c2-83687a948642-openshift-service-ca-bundle\") pod \"model-chainer-raw-e9c91-d688d7fc9-tdzkf\" (UID: \"fd5911f6-35db-4fbb-b0c2-83687a948642\") " pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" Apr 16 18:30:17.490529 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.490454 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fd5911f6-35db-4fbb-b0c2-83687a948642-proxy-tls\") pod \"model-chainer-raw-e9c91-d688d7fc9-tdzkf\" (UID: \"fd5911f6-35db-4fbb-b0c2-83687a948642\") " pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" Apr 16 18:30:17.590813 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.590784 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fd5911f6-35db-4fbb-b0c2-83687a948642-proxy-tls\") pod \"model-chainer-raw-e9c91-d688d7fc9-tdzkf\" (UID: \"fd5911f6-35db-4fbb-b0c2-83687a948642\") " pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" Apr 16 18:30:17.590995 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.590828 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd5911f6-35db-4fbb-b0c2-83687a948642-openshift-service-ca-bundle\") pod \"model-chainer-raw-e9c91-d688d7fc9-tdzkf\" (UID: \"fd5911f6-35db-4fbb-b0c2-83687a948642\") " pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" Apr 16 18:30:17.590995 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:30:17.590938 2570 secret.go:189] Couldn't get secret kserve-ci-e2e-test/model-chainer-raw-e9c91-serving-cert: secret "model-chainer-raw-e9c91-serving-cert" not found Apr 16 18:30:17.591111 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:30:17.591025 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd5911f6-35db-4fbb-b0c2-83687a948642-proxy-tls podName:fd5911f6-35db-4fbb-b0c2-83687a948642 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:18.091004642 +0000 UTC m=+848.896757435 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/fd5911f6-35db-4fbb-b0c2-83687a948642-proxy-tls") pod "model-chainer-raw-e9c91-d688d7fc9-tdzkf" (UID: "fd5911f6-35db-4fbb-b0c2-83687a948642") : secret "model-chainer-raw-e9c91-serving-cert" not found Apr 16 18:30:17.591513 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:17.591491 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd5911f6-35db-4fbb-b0c2-83687a948642-openshift-service-ca-bundle\") pod \"model-chainer-raw-e9c91-d688d7fc9-tdzkf\" (UID: \"fd5911f6-35db-4fbb-b0c2-83687a948642\") " pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" Apr 16 18:30:18.095553 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:18.095517 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fd5911f6-35db-4fbb-b0c2-83687a948642-proxy-tls\") pod \"model-chainer-raw-e9c91-d688d7fc9-tdzkf\" (UID: \"fd5911f6-35db-4fbb-b0c2-83687a948642\") " pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" Apr 16 18:30:18.097953 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:18.097931 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fd5911f6-35db-4fbb-b0c2-83687a948642-proxy-tls\") pod \"model-chainer-raw-e9c91-d688d7fc9-tdzkf\" (UID: \"fd5911f6-35db-4fbb-b0c2-83687a948642\") " pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" Apr 16 18:30:18.358070 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:18.357970 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" Apr 16 18:30:18.477952 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:18.477807 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf"] Apr 16 18:30:18.480794 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:30:18.480753 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd5911f6_35db_4fbb_b0c2_83687a948642.slice/crio-57bd36f1bc23b01d952c8a03d6a5aab5089d97b6fd0409fed6ddd0953a12c1c2 WatchSource:0}: Error finding container 57bd36f1bc23b01d952c8a03d6a5aab5089d97b6fd0409fed6ddd0953a12c1c2: Status 404 returned error can't find the container with id 57bd36f1bc23b01d952c8a03d6a5aab5089d97b6fd0409fed6ddd0953a12c1c2 Apr 16 18:30:18.482636 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:18.482615 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 18:30:19.134579 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:19.134546 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" event={"ID":"fd5911f6-35db-4fbb-b0c2-83687a948642","Type":"ContainerStarted","Data":"57bd36f1bc23b01d952c8a03d6a5aab5089d97b6fd0409fed6ddd0953a12c1c2"} Apr 16 18:30:21.141943 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:21.141903 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" event={"ID":"fd5911f6-35db-4fbb-b0c2-83687a948642","Type":"ContainerStarted","Data":"4622a19a2b36dfa1bc49df610456945ea402603c90f4365bf06da7e86d3e22e7"} Apr 16 18:30:21.142432 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:21.142055 2570 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" Apr 16 18:30:21.159141 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:21.159088 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" podStartSLOduration=1.793971304 podStartE2EDuration="4.159072642s" podCreationTimestamp="2026-04-16 18:30:17 +0000 UTC" firstStartedPulling="2026-04-16 18:30:18.482793873 +0000 UTC m=+849.288546660" lastFinishedPulling="2026-04-16 18:30:20.847895219 +0000 UTC m=+851.653647998" observedRunningTime="2026-04-16 18:30:21.157983656 +0000 UTC m=+851.963736457" watchObservedRunningTime="2026-04-16 18:30:21.159072642 +0000 UTC m=+851.964825443" Apr 16 18:30:27.151666 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.151633 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" Apr 16 18:30:27.487040 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.486963 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf"] Apr 16 18:30:27.487219 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.487174 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" podUID="fd5911f6-35db-4fbb-b0c2-83687a948642" containerName="model-chainer-raw-e9c91" containerID="cri-o://4622a19a2b36dfa1bc49df610456945ea402603c90f4365bf06da7e86d3e22e7" gracePeriod=30 Apr 16 18:30:27.665249 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.665209 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n"] Apr 16 18:30:27.665517 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.665491 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" podUID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerName="kserve-container" containerID="cri-o://6c99a334ba02a913bf3db2894a3d0f451c669b686b0e85f1be4c32036b7aa8db" gracePeriod=30 Apr 16 18:30:27.723161 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.723129 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z"] Apr 16 18:30:27.726538 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.726513 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" Apr 16 18:30:27.735439 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.735414 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z"] Apr 16 18:30:27.782760 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.782688 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw"] Apr 16 18:30:27.786664 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.786640 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" Apr 16 18:30:27.794895 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.794875 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw"] Apr 16 18:30:27.817562 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.817528 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw"] Apr 16 18:30:27.817830 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.817807 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" podUID="db513adf-fc55-4a7d-994d-65a2d17805b8" containerName="kserve-container" containerID="cri-o://f6c0392a8de49441f9df2e3a01253ce4ae6895884097dcf5895ed8b49fb738b6" gracePeriod=30 Apr 16 18:30:27.866902 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.866867 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/4f7d7451-5b7d-4c20-9b30-787c22d2db07-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z\" (UID: \"4f7d7451-5b7d-4c20-9b30-787c22d2db07\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" Apr 16 18:30:27.867070 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.866918 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/abc49327-fc63-43bc-a0af-0d8f8faf577f-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw\" (UID: \"abc49327-fc63-43bc-a0af-0d8f8faf577f\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" Apr 16 18:30:27.967793 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.967750 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/4f7d7451-5b7d-4c20-9b30-787c22d2db07-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z\" (UID: \"4f7d7451-5b7d-4c20-9b30-787c22d2db07\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" Apr 16 18:30:27.967945 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.967818 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/abc49327-fc63-43bc-a0af-0d8f8faf577f-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw\" (UID: \"abc49327-fc63-43bc-a0af-0d8f8faf577f\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" Apr 16 18:30:27.968159 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:27.968139 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/abc49327-fc63-43bc-a0af-0d8f8faf577f-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw\" (UID: \"abc49327-fc63-43bc-a0af-0d8f8faf577f\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" Apr 16 18:30:27.968203 ip-10-0-141-219 
kubenswrapper[2570]: I0416 18:30:27.968154 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/4f7d7451-5b7d-4c20-9b30-787c22d2db07-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z\" (UID: \"4f7d7451-5b7d-4c20-9b30-787c22d2db07\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" Apr 16 18:30:28.037975 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:28.037881 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" Apr 16 18:30:28.098758 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:28.098720 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" Apr 16 18:30:28.165353 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:28.165319 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z"] Apr 16 18:30:28.171588 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:30:28.171561 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice/crio-af49b9f54bb8f1a29fe821388b81ee691283ae2de768c770c7834f730ac67805 WatchSource:0}: Error finding container af49b9f54bb8f1a29fe821388b81ee691283ae2de768c770c7834f730ac67805: Status 404 returned error can't find the container with id af49b9f54bb8f1a29fe821388b81ee691283ae2de768c770c7834f730ac67805 Apr 16 18:30:28.225926 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:28.225897 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw"] Apr 16 18:30:28.228310 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:30:28.228283 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc49327_fc63_43bc_a0af_0d8f8faf577f.slice/crio-a066d64403d422c76c2fc86e42028e9ebb19b20dbef11ebb3f6be2fdd16732fb WatchSource:0}: Error finding container a066d64403d422c76c2fc86e42028e9ebb19b20dbef11ebb3f6be2fdd16732fb: Status 404 returned error can't find the container with id a066d64403d422c76c2fc86e42028e9ebb19b20dbef11ebb3f6be2fdd16732fb Apr 16 18:30:29.167171 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:29.167133 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" event={"ID":"abc49327-fc63-43bc-a0af-0d8f8faf577f","Type":"ContainerStarted","Data":"4cee7f67bf285d24e61fea230220ef9bd2c0e6b007fb2aecf323296805fd3a66"} Apr 16 18:30:29.167617 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:29.167179 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" event={"ID":"abc49327-fc63-43bc-a0af-0d8f8faf577f","Type":"ContainerStarted","Data":"a066d64403d422c76c2fc86e42028e9ebb19b20dbef11ebb3f6be2fdd16732fb"} Apr 16 18:30:29.168633 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:29.168603 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" 
event={"ID":"4f7d7451-5b7d-4c20-9b30-787c22d2db07","Type":"ContainerStarted","Data":"31050e5418bc92a76f3e0b8e82c4131fdcf9568b0668b9df6f038ef1c40a4923"} Apr 16 18:30:29.168791 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:29.168638 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" event={"ID":"4f7d7451-5b7d-4c20-9b30-787c22d2db07","Type":"ContainerStarted","Data":"af49b9f54bb8f1a29fe821388b81ee691283ae2de768c770c7834f730ac67805"} Apr 16 18:30:29.941820 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:29.941767 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" podUID="db513adf-fc55-4a7d-994d-65a2d17805b8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 16 18:30:31.769931 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:31.769908 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" Apr 16 18:30:31.895162 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:31.895136 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/db513adf-fc55-4a7d-994d-65a2d17805b8-kserve-provision-location\") pod \"db513adf-fc55-4a7d-994d-65a2d17805b8\" (UID: \"db513adf-fc55-4a7d-994d-65a2d17805b8\") " Apr 16 18:30:31.895525 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:31.895489 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db513adf-fc55-4a7d-994d-65a2d17805b8-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "db513adf-fc55-4a7d-994d-65a2d17805b8" (UID: "db513adf-fc55-4a7d-994d-65a2d17805b8"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:30:31.995751 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:31.995714 2570 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/db513adf-fc55-4a7d-994d-65a2d17805b8-kserve-provision-location\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:30:32.002972 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.002950 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" Apr 16 18:30:32.096351 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.096283 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/87e2bd64-8f94-4af3-9811-3aaa021efaf9-kserve-provision-location\") pod \"87e2bd64-8f94-4af3-9811-3aaa021efaf9\" (UID: \"87e2bd64-8f94-4af3-9811-3aaa021efaf9\") " Apr 16 18:30:32.096543 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.096522 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87e2bd64-8f94-4af3-9811-3aaa021efaf9-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "87e2bd64-8f94-4af3-9811-3aaa021efaf9" (UID: "87e2bd64-8f94-4af3-9811-3aaa021efaf9"). InnerVolumeSpecName "kserve-provision-location". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:30:32.149134 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.149098 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" podUID="fd5911f6-35db-4fbb-b0c2-83687a948642" containerName="model-chainer-raw-e9c91" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:30:32.178173 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.178142 2570 generic.go:358] "Generic (PLEG): container finished" podID="db513adf-fc55-4a7d-994d-65a2d17805b8" containerID="f6c0392a8de49441f9df2e3a01253ce4ae6895884097dcf5895ed8b49fb738b6" exitCode=0 Apr 16 18:30:32.178307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.178215 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" event={"ID":"db513adf-fc55-4a7d-994d-65a2d17805b8","Type":"ContainerDied","Data":"f6c0392a8de49441f9df2e3a01253ce4ae6895884097dcf5895ed8b49fb738b6"} Apr 16 18:30:32.178307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.178219 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" Apr 16 18:30:32.178307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.178271 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw" event={"ID":"db513adf-fc55-4a7d-994d-65a2d17805b8","Type":"ContainerDied","Data":"60880e6a16f1e5f0e218830341caaf2e66c3f26e6e72fd1ed584297a471c57df"} Apr 16 18:30:32.178307 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.178293 2570 scope.go:117] "RemoveContainer" containerID="f6c0392a8de49441f9df2e3a01253ce4ae6895884097dcf5895ed8b49fb738b6" Apr 16 18:30:32.179605 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.179578 2570 generic.go:358] "Generic (PLEG): container finished" podID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" containerID="31050e5418bc92a76f3e0b8e82c4131fdcf9568b0668b9df6f038ef1c40a4923" exitCode=0 Apr 16 18:30:32.179711 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.179651 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" event={"ID":"4f7d7451-5b7d-4c20-9b30-787c22d2db07","Type":"ContainerDied","Data":"31050e5418bc92a76f3e0b8e82c4131fdcf9568b0668b9df6f038ef1c40a4923"} Apr 16 18:30:32.181199 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.181179 2570 generic.go:358] "Generic (PLEG): container finished" podID="abc49327-fc63-43bc-a0af-0d8f8faf577f" containerID="4cee7f67bf285d24e61fea230220ef9bd2c0e6b007fb2aecf323296805fd3a66" exitCode=0 Apr 16 18:30:32.181287 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.181257 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" event={"ID":"abc49327-fc63-43bc-a0af-0d8f8faf577f","Type":"ContainerDied","Data":"4cee7f67bf285d24e61fea230220ef9bd2c0e6b007fb2aecf323296805fd3a66"} Apr 16 18:30:32.183065 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.183040 2570 generic.go:358] "Generic (PLEG): container finished" podID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerID="6c99a334ba02a913bf3db2894a3d0f451c669b686b0e85f1be4c32036b7aa8db" exitCode=0 Apr 16 18:30:32.183162 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.183074 2570 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" event={"ID":"87e2bd64-8f94-4af3-9811-3aaa021efaf9","Type":"ContainerDied","Data":"6c99a334ba02a913bf3db2894a3d0f451c669b686b0e85f1be4c32036b7aa8db"} Apr 16 18:30:32.183162 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.183095 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" event={"ID":"87e2bd64-8f94-4af3-9811-3aaa021efaf9","Type":"ContainerDied","Data":"ff999b19848dc73183c7695f6c9ed75c11c8f3ef75f783499563e22a7b5fc40e"} Apr 16 18:30:32.183162 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.183122 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n" Apr 16 18:30:32.187151 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.187130 2570 scope.go:117] "RemoveContainer" containerID="5983729725ef376d413c95826e750cea9347c4ceae1d57fa2565fba6a9ddf39f" Apr 16 18:30:32.194836 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.194819 2570 scope.go:117] "RemoveContainer" containerID="f6c0392a8de49441f9df2e3a01253ce4ae6895884097dcf5895ed8b49fb738b6" Apr 16 18:30:32.195162 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:30:32.195144 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6c0392a8de49441f9df2e3a01253ce4ae6895884097dcf5895ed8b49fb738b6\": container with ID starting with f6c0392a8de49441f9df2e3a01253ce4ae6895884097dcf5895ed8b49fb738b6 not found: ID does not exist" containerID="f6c0392a8de49441f9df2e3a01253ce4ae6895884097dcf5895ed8b49fb738b6" Apr 16 18:30:32.195238 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.195169 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6c0392a8de49441f9df2e3a01253ce4ae6895884097dcf5895ed8b49fb738b6"} err="failed to get container status \"f6c0392a8de49441f9df2e3a01253ce4ae6895884097dcf5895ed8b49fb738b6\": rpc error: code = NotFound desc = could not find container \"f6c0392a8de49441f9df2e3a01253ce4ae6895884097dcf5895ed8b49fb738b6\": container with ID starting with f6c0392a8de49441f9df2e3a01253ce4ae6895884097dcf5895ed8b49fb738b6 not found: ID does not exist" Apr 16 18:30:32.195238 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.195186 2570 scope.go:117] "RemoveContainer" containerID="5983729725ef376d413c95826e750cea9347c4ceae1d57fa2565fba6a9ddf39f" Apr 16 18:30:32.195396 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:30:32.195382 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5983729725ef376d413c95826e750cea9347c4ceae1d57fa2565fba6a9ddf39f\": container with ID starting with 5983729725ef376d413c95826e750cea9347c4ceae1d57fa2565fba6a9ddf39f not found: ID does not exist" containerID="5983729725ef376d413c95826e750cea9347c4ceae1d57fa2565fba6a9ddf39f" Apr 16 18:30:32.195429 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.195399 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5983729725ef376d413c95826e750cea9347c4ceae1d57fa2565fba6a9ddf39f"} err="failed to get container status \"5983729725ef376d413c95826e750cea9347c4ceae1d57fa2565fba6a9ddf39f\": rpc error: code = NotFound desc = could not find container \"5983729725ef376d413c95826e750cea9347c4ceae1d57fa2565fba6a9ddf39f\": container with ID starting 
with 5983729725ef376d413c95826e750cea9347c4ceae1d57fa2565fba6a9ddf39f not found: ID does not exist" Apr 16 18:30:32.195429 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.195412 2570 scope.go:117] "RemoveContainer" containerID="6c99a334ba02a913bf3db2894a3d0f451c669b686b0e85f1be4c32036b7aa8db" Apr 16 18:30:32.197017 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.196993 2570 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/87e2bd64-8f94-4af3-9811-3aaa021efaf9-kserve-provision-location\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:30:32.203419 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.203398 2570 scope.go:117] "RemoveContainer" containerID="bf929ef1ed056b7f9aff9e16a6f1feab793cdd21e7e510530b7102a9a8aac2a5" Apr 16 18:30:32.216245 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.216209 2570 scope.go:117] "RemoveContainer" containerID="6c99a334ba02a913bf3db2894a3d0f451c669b686b0e85f1be4c32036b7aa8db" Apr 16 18:30:32.216562 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:30:32.216540 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c99a334ba02a913bf3db2894a3d0f451c669b686b0e85f1be4c32036b7aa8db\": container with ID starting with 6c99a334ba02a913bf3db2894a3d0f451c669b686b0e85f1be4c32036b7aa8db not found: ID does not exist" containerID="6c99a334ba02a913bf3db2894a3d0f451c669b686b0e85f1be4c32036b7aa8db" Apr 16 18:30:32.216653 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.216571 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c99a334ba02a913bf3db2894a3d0f451c669b686b0e85f1be4c32036b7aa8db"} err="failed to get container status \"6c99a334ba02a913bf3db2894a3d0f451c669b686b0e85f1be4c32036b7aa8db\": rpc error: code = NotFound desc = could not find container \"6c99a334ba02a913bf3db2894a3d0f451c669b686b0e85f1be4c32036b7aa8db\": container with ID starting with 6c99a334ba02a913bf3db2894a3d0f451c669b686b0e85f1be4c32036b7aa8db not found: ID does not exist" Apr 16 18:30:32.216653 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.216593 2570 scope.go:117] "RemoveContainer" containerID="bf929ef1ed056b7f9aff9e16a6f1feab793cdd21e7e510530b7102a9a8aac2a5" Apr 16 18:30:32.216856 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:30:32.216833 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf929ef1ed056b7f9aff9e16a6f1feab793cdd21e7e510530b7102a9a8aac2a5\": container with ID starting with bf929ef1ed056b7f9aff9e16a6f1feab793cdd21e7e510530b7102a9a8aac2a5 not found: ID does not exist" containerID="bf929ef1ed056b7f9aff9e16a6f1feab793cdd21e7e510530b7102a9a8aac2a5" Apr 16 18:30:32.216915 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.216865 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf929ef1ed056b7f9aff9e16a6f1feab793cdd21e7e510530b7102a9a8aac2a5"} err="failed to get container status \"bf929ef1ed056b7f9aff9e16a6f1feab793cdd21e7e510530b7102a9a8aac2a5\": rpc error: code = NotFound desc = could not find container \"bf929ef1ed056b7f9aff9e16a6f1feab793cdd21e7e510530b7102a9a8aac2a5\": container with ID starting with bf929ef1ed056b7f9aff9e16a6f1feab793cdd21e7e510530b7102a9a8aac2a5 not found: ID does not exist" Apr 16 18:30:32.224155 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.224135 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n"] Apr 16 18:30:32.228074 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.228053 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-e9c91-predictor-547f94d7c8-6pj9n"] Apr 16 18:30:32.243308 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.242278 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw"] Apr 16 18:30:32.245334 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:32.245263 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-e9c91-predictor-5d78db9f87-v4sjw"] Apr 16 18:30:33.188862 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:33.188825 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" event={"ID":"4f7d7451-5b7d-4c20-9b30-787c22d2db07","Type":"ContainerStarted","Data":"2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca"} Apr 16 18:30:33.189333 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:33.189142 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" Apr 16 18:30:33.190336 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:33.190309 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" event={"ID":"abc49327-fc63-43bc-a0af-0d8f8faf577f","Type":"ContainerStarted","Data":"f98f759fafab36e664e74dd60b6a9ec3b315ec746708c360b7218bc28f63862c"} Apr 16 18:30:33.190473 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:33.190445 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" podUID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 16 18:30:33.190595 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:33.190579 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" Apr 16 18:30:33.191478 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:33.191456 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" podUID="abc49327-fc63-43bc-a0af-0d8f8faf577f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused" Apr 16 18:30:33.205677 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:33.205636 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" podStartSLOduration=6.205624913 podStartE2EDuration="6.205624913s" podCreationTimestamp="2026-04-16 18:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:30:33.204387015 +0000 UTC m=+864.010139815" watchObservedRunningTime="2026-04-16 18:30:33.205624913 +0000 UTC m=+864.011377714" Apr 16 18:30:33.222029 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:33.221991 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" podStartSLOduration=6.22197732 podStartE2EDuration="6.22197732s" podCreationTimestamp="2026-04-16 18:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:30:33.220680614 +0000 UTC m=+864.026433415" watchObservedRunningTime="2026-04-16 18:30:33.22197732 +0000 UTC m=+864.027730122" Apr 16 18:30:33.780804 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:33.780774 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" path="/var/lib/kubelet/pods/87e2bd64-8f94-4af3-9811-3aaa021efaf9/volumes" Apr 16 18:30:33.781125 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:33.781113 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db513adf-fc55-4a7d-994d-65a2d17805b8" path="/var/lib/kubelet/pods/db513adf-fc55-4a7d-994d-65a2d17805b8/volumes" Apr 16 18:30:34.192893 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:34.192852 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" podUID="abc49327-fc63-43bc-a0af-0d8f8faf577f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused" Apr 16 18:30:34.193313 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:34.193009 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" podUID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 16 18:30:37.149179 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:37.149136 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" podUID="fd5911f6-35db-4fbb-b0c2-83687a948642" containerName="model-chainer-raw-e9c91" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:30:42.149272 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:42.149213 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" podUID="fd5911f6-35db-4fbb-b0c2-83687a948642" containerName="model-chainer-raw-e9c91" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:30:42.149685 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:42.149348 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" Apr 16 18:30:44.193428 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:44.193382 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" podUID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 16 18:30:44.193822 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:44.193382 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" podUID="abc49327-fc63-43bc-a0af-0d8f8faf577f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused" Apr 16 18:30:47.149559 ip-10-0-141-219 
kubenswrapper[2570]: I0416 18:30:47.149523 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" podUID="fd5911f6-35db-4fbb-b0c2-83687a948642" containerName="model-chainer-raw-e9c91" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:30:52.149033 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:52.148996 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" podUID="fd5911f6-35db-4fbb-b0c2-83687a948642" containerName="model-chainer-raw-e9c91" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:30:54.193572 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:54.193522 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" podUID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 16 18:30:54.193970 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:54.193522 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" podUID="abc49327-fc63-43bc-a0af-0d8f8faf577f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused" Apr 16 18:30:57.149363 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:57.149321 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" podUID="fd5911f6-35db-4fbb-b0c2-83687a948642" containerName="model-chainer-raw-e9c91" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:30:58.134481 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:58.134458 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" Apr 16 18:30:58.261117 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:58.261033 2570 generic.go:358] "Generic (PLEG): container finished" podID="fd5911f6-35db-4fbb-b0c2-83687a948642" containerID="4622a19a2b36dfa1bc49df610456945ea402603c90f4365bf06da7e86d3e22e7" exitCode=0 Apr 16 18:30:58.261117 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:58.261083 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" event={"ID":"fd5911f6-35db-4fbb-b0c2-83687a948642","Type":"ContainerDied","Data":"4622a19a2b36dfa1bc49df610456945ea402603c90f4365bf06da7e86d3e22e7"} Apr 16 18:30:58.261117 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:58.261099 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" Apr 16 18:30:58.261117 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:58.261117 2570 scope.go:117] "RemoveContainer" containerID="4622a19a2b36dfa1bc49df610456945ea402603c90f4365bf06da7e86d3e22e7" Apr 16 18:30:58.261712 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:58.261106 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf" event={"ID":"fd5911f6-35db-4fbb-b0c2-83687a948642","Type":"ContainerDied","Data":"57bd36f1bc23b01d952c8a03d6a5aab5089d97b6fd0409fed6ddd0953a12c1c2"} Apr 16 18:30:58.268468 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:58.268450 2570 scope.go:117] "RemoveContainer" containerID="4622a19a2b36dfa1bc49df610456945ea402603c90f4365bf06da7e86d3e22e7" Apr 16 18:30:58.268727 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:30:58.268701 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4622a19a2b36dfa1bc49df610456945ea402603c90f4365bf06da7e86d3e22e7\": container with ID starting with 4622a19a2b36dfa1bc49df610456945ea402603c90f4365bf06da7e86d3e22e7 not found: ID does not exist" containerID="4622a19a2b36dfa1bc49df610456945ea402603c90f4365bf06da7e86d3e22e7" Apr 16 18:30:58.268788 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:58.268740 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4622a19a2b36dfa1bc49df610456945ea402603c90f4365bf06da7e86d3e22e7"} err="failed to get container status \"4622a19a2b36dfa1bc49df610456945ea402603c90f4365bf06da7e86d3e22e7\": rpc error: code = NotFound desc = could not find container \"4622a19a2b36dfa1bc49df610456945ea402603c90f4365bf06da7e86d3e22e7\": container with ID starting with 4622a19a2b36dfa1bc49df610456945ea402603c90f4365bf06da7e86d3e22e7 not found: ID does not exist" Apr 16 18:30:58.293079 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:58.293059 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fd5911f6-35db-4fbb-b0c2-83687a948642-proxy-tls\") pod \"fd5911f6-35db-4fbb-b0c2-83687a948642\" (UID: \"fd5911f6-35db-4fbb-b0c2-83687a948642\") " Apr 16 18:30:58.293137 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:58.293112 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd5911f6-35db-4fbb-b0c2-83687a948642-openshift-service-ca-bundle\") pod \"fd5911f6-35db-4fbb-b0c2-83687a948642\" (UID: \"fd5911f6-35db-4fbb-b0c2-83687a948642\") " Apr 16 18:30:58.293475 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:58.293450 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd5911f6-35db-4fbb-b0c2-83687a948642-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "fd5911f6-35db-4fbb-b0c2-83687a948642" (UID: "fd5911f6-35db-4fbb-b0c2-83687a948642"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:30:58.295243 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:58.295208 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5911f6-35db-4fbb-b0c2-83687a948642-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fd5911f6-35db-4fbb-b0c2-83687a948642" (UID: "fd5911f6-35db-4fbb-b0c2-83687a948642"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:30:58.393558 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:58.393525 2570 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fd5911f6-35db-4fbb-b0c2-83687a948642-proxy-tls\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:30:58.393558 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:58.393554 2570 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd5911f6-35db-4fbb-b0c2-83687a948642-openshift-service-ca-bundle\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:30:58.582177 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:58.582146 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf"] Apr 16 18:30:58.585844 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:58.585815 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-e9c91-d688d7fc9-tdzkf"] Apr 16 18:30:59.781603 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:30:59.781529 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd5911f6-35db-4fbb-b0c2-83687a948642" path="/var/lib/kubelet/pods/fd5911f6-35db-4fbb-b0c2-83687a948642/volumes" Apr 16 18:31:04.193444 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:04.193399 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" podUID="abc49327-fc63-43bc-a0af-0d8f8faf577f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused" Apr 16 18:31:04.193815 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:04.193397 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" podUID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 16 18:31:09.704329 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:09.704296 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-acl-logging/0.log" Apr 16 18:31:09.706570 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:09.706543 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-acl-logging/0.log" Apr 16 18:31:14.193861 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:14.193816 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" podUID="abc49327-fc63-43bc-a0af-0d8f8faf577f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused" Apr 16 18:31:14.193861 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:14.193817 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" podUID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 16 18:31:24.193883 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:24.193842 2570 prober.go:120] 
"Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" podUID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 16 18:31:24.194321 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:24.193842 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" podUID="abc49327-fc63-43bc-a0af-0d8f8faf577f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused" Apr 16 18:31:34.193729 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:34.193679 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" podUID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 16 18:31:34.194383 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:34.194364 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" Apr 16 18:31:44.194247 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:44.194193 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" Apr 16 18:31:57.762259 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.762206 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8"] Apr 16 18:31:57.762734 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.762591 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="db513adf-fc55-4a7d-994d-65a2d17805b8" containerName="kserve-container" Apr 16 18:31:57.762734 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.762610 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="db513adf-fc55-4a7d-994d-65a2d17805b8" containerName="kserve-container" Apr 16 18:31:57.762734 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.762636 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerName="kserve-container" Apr 16 18:31:57.762734 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.762646 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerName="kserve-container" Apr 16 18:31:57.762734 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.762661 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerName="storage-initializer" Apr 16 18:31:57.762734 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.762669 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerName="storage-initializer" Apr 16 18:31:57.762734 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.762684 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="db513adf-fc55-4a7d-994d-65a2d17805b8" containerName="storage-initializer" Apr 16 18:31:57.762734 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.762692 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="db513adf-fc55-4a7d-994d-65a2d17805b8" 
containerName="storage-initializer" Apr 16 18:31:57.762734 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.762700 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fd5911f6-35db-4fbb-b0c2-83687a948642" containerName="model-chainer-raw-e9c91" Apr 16 18:31:57.762734 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.762708 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5911f6-35db-4fbb-b0c2-83687a948642" containerName="model-chainer-raw-e9c91" Apr 16 18:31:57.763193 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.762774 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="fd5911f6-35db-4fbb-b0c2-83687a948642" containerName="model-chainer-raw-e9c91" Apr 16 18:31:57.763193 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.762788 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="87e2bd64-8f94-4af3-9811-3aaa021efaf9" containerName="kserve-container" Apr 16 18:31:57.763193 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.762797 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="db513adf-fc55-4a7d-994d-65a2d17805b8" containerName="kserve-container" Apr 16 18:31:57.765783 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.765761 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" Apr 16 18:31:57.768127 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.768107 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-raw-hpa-32270-kube-rbac-proxy-sar-config\"" Apr 16 18:31:57.768276 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.768220 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 16 18:31:57.768409 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.768335 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-raw-hpa-32270-serving-cert\"" Apr 16 18:31:57.775992 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.775969 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8"] Apr 16 18:31:57.814257 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.814196 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c2bcbda-2193-421c-991f-44c08928d07a-proxy-tls\") pod \"model-chainer-raw-hpa-32270-678bc766cf-d8zx8\" (UID: \"7c2bcbda-2193-421c-991f-44c08928d07a\") " pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" Apr 16 18:31:57.814444 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.814354 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2bcbda-2193-421c-991f-44c08928d07a-openshift-service-ca-bundle\") pod \"model-chainer-raw-hpa-32270-678bc766cf-d8zx8\" (UID: \"7c2bcbda-2193-421c-991f-44c08928d07a\") " pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" Apr 16 18:31:57.915264 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.915211 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c2bcbda-2193-421c-991f-44c08928d07a-proxy-tls\") pod 
\"model-chainer-raw-hpa-32270-678bc766cf-d8zx8\" (UID: \"7c2bcbda-2193-421c-991f-44c08928d07a\") " pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" Apr 16 18:31:57.915434 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.915309 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2bcbda-2193-421c-991f-44c08928d07a-openshift-service-ca-bundle\") pod \"model-chainer-raw-hpa-32270-678bc766cf-d8zx8\" (UID: \"7c2bcbda-2193-421c-991f-44c08928d07a\") " pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" Apr 16 18:31:57.915434 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:31:57.915388 2570 secret.go:189] Couldn't get secret kserve-ci-e2e-test/model-chainer-raw-hpa-32270-serving-cert: secret "model-chainer-raw-hpa-32270-serving-cert" not found Apr 16 18:31:57.915562 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:31:57.915482 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c2bcbda-2193-421c-991f-44c08928d07a-proxy-tls podName:7c2bcbda-2193-421c-991f-44c08928d07a nodeName:}" failed. No retries permitted until 2026-04-16 18:31:58.415459522 +0000 UTC m=+949.221212322 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/7c2bcbda-2193-421c-991f-44c08928d07a-proxy-tls") pod "model-chainer-raw-hpa-32270-678bc766cf-d8zx8" (UID: "7c2bcbda-2193-421c-991f-44c08928d07a") : secret "model-chainer-raw-hpa-32270-serving-cert" not found Apr 16 18:31:57.915874 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:57.915855 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2bcbda-2193-421c-991f-44c08928d07a-openshift-service-ca-bundle\") pod \"model-chainer-raw-hpa-32270-678bc766cf-d8zx8\" (UID: \"7c2bcbda-2193-421c-991f-44c08928d07a\") " pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" Apr 16 18:31:58.419281 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:58.419244 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c2bcbda-2193-421c-991f-44c08928d07a-proxy-tls\") pod \"model-chainer-raw-hpa-32270-678bc766cf-d8zx8\" (UID: \"7c2bcbda-2193-421c-991f-44c08928d07a\") " pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" Apr 16 18:31:58.421779 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:58.421757 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c2bcbda-2193-421c-991f-44c08928d07a-proxy-tls\") pod \"model-chainer-raw-hpa-32270-678bc766cf-d8zx8\" (UID: \"7c2bcbda-2193-421c-991f-44c08928d07a\") " pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" Apr 16 18:31:58.676346 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:58.676215 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" Apr 16 18:31:58.791151 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:58.791117 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8"] Apr 16 18:31:58.794380 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:31:58.794356 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c2bcbda_2193_421c_991f_44c08928d07a.slice/crio-924bec179e552c1fc97a1cb84e0248389819ad6eb367201a571fbd3a91719909 WatchSource:0}: Error finding container 924bec179e552c1fc97a1cb84e0248389819ad6eb367201a571fbd3a91719909: Status 404 returned error can't find the container with id 924bec179e552c1fc97a1cb84e0248389819ad6eb367201a571fbd3a91719909 Apr 16 18:31:59.428538 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:59.428504 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" event={"ID":"7c2bcbda-2193-421c-991f-44c08928d07a","Type":"ContainerStarted","Data":"67927e90fdbe066f5cd592c7abf113b786606ce625205cbe05aa540c5230f3c7"} Apr 16 18:31:59.428538 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:59.428539 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" event={"ID":"7c2bcbda-2193-421c-991f-44c08928d07a","Type":"ContainerStarted","Data":"924bec179e552c1fc97a1cb84e0248389819ad6eb367201a571fbd3a91719909"} Apr 16 18:31:59.428749 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:59.428610 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" Apr 16 18:31:59.445612 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:31:59.445571 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" podStartSLOduration=2.445558106 podStartE2EDuration="2.445558106s" podCreationTimestamp="2026-04-16 18:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:31:59.443739688 +0000 UTC m=+950.249492488" watchObservedRunningTime="2026-04-16 18:31:59.445558106 +0000 UTC m=+950.251310906" Apr 16 18:32:05.437791 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:05.437761 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" Apr 16 18:32:07.819223 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:07.819189 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8"] Apr 16 18:32:07.819614 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:07.819526 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" podUID="7c2bcbda-2193-421c-991f-44c08928d07a" containerName="model-chainer-raw-hpa-32270" containerID="cri-o://67927e90fdbe066f5cd592c7abf113b786606ce625205cbe05aa540c5230f3c7" gracePeriod=30 Apr 16 18:32:07.997165 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:07.997121 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z"] Apr 16 18:32:07.997416 ip-10-0-141-219 
kubenswrapper[2570]: I0416 18:32:07.997394 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" podUID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" containerName="kserve-container" containerID="cri-o://2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca" gracePeriod=30 Apr 16 18:32:08.108513 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:08.108437 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw"] Apr 16 18:32:08.108730 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:08.108708 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" podUID="abc49327-fc63-43bc-a0af-0d8f8faf577f" containerName="kserve-container" containerID="cri-o://f98f759fafab36e664e74dd60b6a9ec3b315ec746708c360b7218bc28f63862c" gracePeriod=30 Apr 16 18:32:10.435465 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:10.435430 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" podUID="7c2bcbda-2193-421c-991f-44c08928d07a" containerName="model-chainer-raw-hpa-32270" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:32:11.460466 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:11.460430 2570 generic.go:358] "Generic (PLEG): container finished" podID="abc49327-fc63-43bc-a0af-0d8f8faf577f" containerID="f98f759fafab36e664e74dd60b6a9ec3b315ec746708c360b7218bc28f63862c" exitCode=0 Apr 16 18:32:11.460919 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:11.460506 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" event={"ID":"abc49327-fc63-43bc-a0af-0d8f8faf577f","Type":"ContainerDied","Data":"f98f759fafab36e664e74dd60b6a9ec3b315ec746708c360b7218bc28f63862c"} Apr 16 18:32:11.545486 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:11.545460 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" Apr 16 18:32:11.613282 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:11.613252 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/abc49327-fc63-43bc-a0af-0d8f8faf577f-kserve-provision-location\") pod \"abc49327-fc63-43bc-a0af-0d8f8faf577f\" (UID: \"abc49327-fc63-43bc-a0af-0d8f8faf577f\") " Apr 16 18:32:11.613490 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:11.613464 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abc49327-fc63-43bc-a0af-0d8f8faf577f-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "abc49327-fc63-43bc-a0af-0d8f8faf577f" (UID: "abc49327-fc63-43bc-a0af-0d8f8faf577f"). InnerVolumeSpecName "kserve-provision-location". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:32:11.714193 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:11.714159 2570 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/abc49327-fc63-43bc-a0af-0d8f8faf577f-kserve-provision-location\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:32:12.329342 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.329319 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" Apr 16 18:32:12.418455 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.418387 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/4f7d7451-5b7d-4c20-9b30-787c22d2db07-kserve-provision-location\") pod \"4f7d7451-5b7d-4c20-9b30-787c22d2db07\" (UID: \"4f7d7451-5b7d-4c20-9b30-787c22d2db07\") " Apr 16 18:32:12.418733 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.418707 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f7d7451-5b7d-4c20-9b30-787c22d2db07-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "4f7d7451-5b7d-4c20-9b30-787c22d2db07" (UID: "4f7d7451-5b7d-4c20-9b30-787c22d2db07"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:32:12.464928 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.464904 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" Apr 16 18:32:12.465387 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.464906 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw" event={"ID":"abc49327-fc63-43bc-a0af-0d8f8faf577f","Type":"ContainerDied","Data":"a066d64403d422c76c2fc86e42028e9ebb19b20dbef11ebb3f6be2fdd16732fb"} Apr 16 18:32:12.465387 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.465013 2570 scope.go:117] "RemoveContainer" containerID="f98f759fafab36e664e74dd60b6a9ec3b315ec746708c360b7218bc28f63862c" Apr 16 18:32:12.466443 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.466416 2570 generic.go:358] "Generic (PLEG): container finished" podID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" containerID="2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca" exitCode=0 Apr 16 18:32:12.466537 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.466477 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" Apr 16 18:32:12.466537 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.466477 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" event={"ID":"4f7d7451-5b7d-4c20-9b30-787c22d2db07","Type":"ContainerDied","Data":"2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca"} Apr 16 18:32:12.466537 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.466503 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z" event={"ID":"4f7d7451-5b7d-4c20-9b30-787c22d2db07","Type":"ContainerDied","Data":"af49b9f54bb8f1a29fe821388b81ee691283ae2de768c770c7834f730ac67805"} Apr 16 18:32:12.472722 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.472704 2570 scope.go:117] "RemoveContainer" containerID="4cee7f67bf285d24e61fea230220ef9bd2c0e6b007fb2aecf323296805fd3a66" Apr 16 18:32:12.479553 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.479536 2570 scope.go:117] "RemoveContainer" containerID="2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca" Apr 16 18:32:12.486902 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.486877 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw"] Apr 16 18:32:12.486963 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.486948 2570 scope.go:117] "RemoveContainer" containerID="31050e5418bc92a76f3e0b8e82c4131fdcf9568b0668b9df6f038ef1c40a4923" Apr 16 18:32:12.489604 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.489586 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw"] Apr 16 18:32:12.494073 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.494056 2570 scope.go:117] "RemoveContainer" containerID="2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca" Apr 16 18:32:12.494370 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:32:12.494353 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca\": container with ID starting with 2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca not found: ID does not exist" containerID="2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca" Apr 16 18:32:12.494418 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.494378 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca"} err="failed to get container status \"2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca\": rpc error: code = NotFound desc = could not find container \"2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca\": container with ID starting with 2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca not found: ID does not exist" Apr 16 18:32:12.494418 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.494396 2570 scope.go:117] "RemoveContainer" containerID="31050e5418bc92a76f3e0b8e82c4131fdcf9568b0668b9df6f038ef1c40a4923" Apr 16 18:32:12.494656 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:32:12.494638 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"31050e5418bc92a76f3e0b8e82c4131fdcf9568b0668b9df6f038ef1c40a4923\": container with ID starting with 31050e5418bc92a76f3e0b8e82c4131fdcf9568b0668b9df6f038ef1c40a4923 not found: ID does not exist" containerID="31050e5418bc92a76f3e0b8e82c4131fdcf9568b0668b9df6f038ef1c40a4923" Apr 16 18:32:12.494717 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.494662 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31050e5418bc92a76f3e0b8e82c4131fdcf9568b0668b9df6f038ef1c40a4923"} err="failed to get container status \"31050e5418bc92a76f3e0b8e82c4131fdcf9568b0668b9df6f038ef1c40a4923\": rpc error: code = NotFound desc = could not find container \"31050e5418bc92a76f3e0b8e82c4131fdcf9568b0668b9df6f038ef1c40a4923\": container with ID starting with 31050e5418bc92a76f3e0b8e82c4131fdcf9568b0668b9df6f038ef1c40a4923 not found: ID does not exist" Apr 16 18:32:12.500424 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.500403 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z"] Apr 16 18:32:12.504164 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.504144 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z"] Apr 16 18:32:12.519082 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:12.519063 2570 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/4f7d7451-5b7d-4c20-9b30-787c22d2db07-kserve-provision-location\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:32:13.781423 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:13.781387 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" path="/var/lib/kubelet/pods/4f7d7451-5b7d-4c20-9b30-787c22d2db07/volumes" Apr 16 18:32:13.781792 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:13.781759 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abc49327-fc63-43bc-a0af-0d8f8faf577f" path="/var/lib/kubelet/pods/abc49327-fc63-43bc-a0af-0d8f8faf577f/volumes" Apr 16 18:32:15.436351 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:15.436310 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" podUID="7c2bcbda-2193-421c-991f-44c08928d07a" containerName="model-chainer-raw-hpa-32270" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:32:18.040484 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.040443 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf"] Apr 16 18:32:18.040963 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.040806 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" containerName="kserve-container" Apr 16 18:32:18.040963 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.040826 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" containerName="kserve-container" Apr 16 18:32:18.040963 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.040842 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" containerName="storage-initializer" Apr 16 18:32:18.040963 
ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.040850 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" containerName="storage-initializer" Apr 16 18:32:18.040963 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.040870 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="abc49327-fc63-43bc-a0af-0d8f8faf577f" containerName="kserve-container" Apr 16 18:32:18.040963 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.040878 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc49327-fc63-43bc-a0af-0d8f8faf577f" containerName="kserve-container" Apr 16 18:32:18.040963 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.040893 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="abc49327-fc63-43bc-a0af-0d8f8faf577f" containerName="storage-initializer" Apr 16 18:32:18.040963 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.040901 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc49327-fc63-43bc-a0af-0d8f8faf577f" containerName="storage-initializer" Apr 16 18:32:18.040963 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.040966 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="abc49327-fc63-43bc-a0af-0d8f8faf577f" containerName="kserve-container" Apr 16 18:32:18.041422 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.040979 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="4f7d7451-5b7d-4c20-9b30-787c22d2db07" containerName="kserve-container" Apr 16 18:32:18.044318 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.044295 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" Apr 16 18:32:18.054251 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.054200 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf"] Apr 16 18:32:18.158685 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.158629 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0c6021fd-aa29-4d75-a316-4660eafebeee-kserve-provision-location\") pod \"isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf\" (UID: \"0c6021fd-aa29-4d75-a316-4660eafebeee\") " pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" Apr 16 18:32:18.259973 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.259927 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0c6021fd-aa29-4d75-a316-4660eafebeee-kserve-provision-location\") pod \"isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf\" (UID: \"0c6021fd-aa29-4d75-a316-4660eafebeee\") " pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" Apr 16 18:32:18.260348 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.260326 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0c6021fd-aa29-4d75-a316-4660eafebeee-kserve-provision-location\") pod \"isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf\" (UID: \"0c6021fd-aa29-4d75-a316-4660eafebeee\") " pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" Apr 16 18:32:18.354920 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.354841 2570 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" Apr 16 18:32:18.472877 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.472845 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf"] Apr 16 18:32:18.484592 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:18.484568 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" event={"ID":"0c6021fd-aa29-4d75-a316-4660eafebeee","Type":"ContainerStarted","Data":"00fc00de29c66cda4a40d51c6f60fd944cccaaa5e583289ad9c8e3b5f3d63893"} Apr 16 18:32:19.490009 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:19.489971 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" event={"ID":"0c6021fd-aa29-4d75-a316-4660eafebeee","Type":"ContainerStarted","Data":"a04add077659df30f16fecd74ccb8196b9b9af2f51e55c919d3b47b5360cb5c4"} Apr 16 18:32:20.436130 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:20.436088 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" podUID="7c2bcbda-2193-421c-991f-44c08928d07a" containerName="model-chainer-raw-hpa-32270" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:32:20.436356 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:20.436194 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" Apr 16 18:32:22.499832 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:22.499798 2570 generic.go:358] "Generic (PLEG): container finished" podID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerID="a04add077659df30f16fecd74ccb8196b9b9af2f51e55c919d3b47b5360cb5c4" exitCode=0 Apr 16 18:32:22.500198 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:22.499862 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" event={"ID":"0c6021fd-aa29-4d75-a316-4660eafebeee","Type":"ContainerDied","Data":"a04add077659df30f16fecd74ccb8196b9b9af2f51e55c919d3b47b5360cb5c4"} Apr 16 18:32:23.505435 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:23.505403 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" event={"ID":"0c6021fd-aa29-4d75-a316-4660eafebeee","Type":"ContainerStarted","Data":"5c9a11f7aeb47f5af28828429da553cdc990ae706832d8656c7c4cb41f277ef9"} Apr 16 18:32:23.505435 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:23.505441 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" event={"ID":"0c6021fd-aa29-4d75-a316-4660eafebeee","Type":"ContainerStarted","Data":"5e2b8ee9c6f30d019d332837df58caf7f8babb98a9b0dac2d527c4378cfbd0c6"} Apr 16 18:32:23.505956 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:23.505847 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" Apr 16 18:32:23.505956 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:23.505876 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" Apr 16 18:32:23.507028 ip-10-0-141-219 kubenswrapper[2570]: I0416 
18:32:23.506989 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 16 18:32:23.507631 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:23.507606 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:32:23.523679 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:23.523635 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podStartSLOduration=5.523624167 podStartE2EDuration="5.523624167s" podCreationTimestamp="2026-04-16 18:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:32:23.522742317 +0000 UTC m=+974.328495117" watchObservedRunningTime="2026-04-16 18:32:23.523624167 +0000 UTC m=+974.329377011" Apr 16 18:32:24.508991 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:24.508950 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 16 18:32:24.509458 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:24.509266 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:32:25.435665 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:25.435626 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" podUID="7c2bcbda-2193-421c-991f-44c08928d07a" containerName="model-chainer-raw-hpa-32270" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:32:30.435868 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:30.435782 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" podUID="7c2bcbda-2193-421c-991f-44c08928d07a" containerName="model-chainer-raw-hpa-32270" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:32:32.222651 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:32:32.222610 2570 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d252e91ad2d85e036b490ed4efc16dbdf0386c285784bb726cd9000e9a20363c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d252e91ad2d85e036b490ed4efc16dbdf0386c285784bb726cd9000e9a20363c/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kserve-ci-e2e-test_isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z_4f7d7451-5b7d-4c20-9b30-787c22d2db07/kserve-container/0.log" to get inode usage: stat /var/log/pods/kserve-ci-e2e-test_isvc-sklearn-graph-raw-hpa-32270-predictor-7c65ccdf94-kwc9z_4f7d7451-5b7d-4c20-9b30-787c22d2db07/kserve-container/0.log: no such 
file or directory Apr 16 18:32:32.226747 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:32:32.226727 2570 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c0b9321cca6158eb0720b59d42aebc1604c78cb7a7337b7a17b822c5c014d151/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c0b9321cca6158eb0720b59d42aebc1604c78cb7a7337b7a17b822c5c014d151/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kserve-ci-e2e-test_isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw_abc49327-fc63-43bc-a0af-0d8f8faf577f/kserve-container/0.log" to get inode usage: stat /var/log/pods/kserve-ci-e2e-test_isvc-xgboost-graph-raw-hpa-32270-predictor-5ffbcf999-7bnlw_abc49327-fc63-43bc-a0af-0d8f8faf577f/kserve-container/0.log: no such file or directory Apr 16 18:32:34.509449 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:34.509396 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 16 18:32:34.509874 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:34.509799 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:32:35.435826 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:35.435789 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" podUID="7c2bcbda-2193-421c-991f-44c08928d07a" containerName="model-chainer-raw-hpa-32270" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:32:37.837809 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:32:37.837743 2570 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc49327_fc63_43bc_a0af_0d8f8faf577f.slice/crio-conmon-f98f759fafab36e664e74dd60b6a9ec3b315ec746708c360b7218bc28f63862c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc49327_fc63_43bc_a0af_0d8f8faf577f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc49327_fc63_43bc_a0af_0d8f8faf577f.slice/crio-a066d64403d422c76c2fc86e42028e9ebb19b20dbef11ebb3f6be2fdd16732fb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice/crio-af49b9f54bb8f1a29fe821388b81ee691283ae2de768c770c7834f730ac67805\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc49327_fc63_43bc_a0af_0d8f8faf577f.slice/crio-f98f759fafab36e664e74dd60b6a9ec3b315ec746708c360b7218bc28f63862c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice/crio-2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice/crio-conmon-2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca.scope\": RecentStats: unable to find data in memory cache]" Apr 16 18:32:37.838386 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:32:37.838355 2570 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c6021fd_aa29_4d75_a316_4660eafebeee.slice/crio-conmon-a04add077659df30f16fecd74ccb8196b9b9af2f51e55c919d3b47b5360cb5c4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c6021fd_aa29_4d75_a316_4660eafebeee.slice/crio-a04add077659df30f16fecd74ccb8196b9b9af2f51e55c919d3b47b5360cb5c4.scope\": RecentStats: unable to find data in memory cache]" Apr 16 18:32:37.838500 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:32:37.837758 2570 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c2bcbda_2193_421c_991f_44c08928d07a.slice/crio-924bec179e552c1fc97a1cb84e0248389819ad6eb367201a571fbd3a91719909\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice/crio-af49b9f54bb8f1a29fe821388b81ee691283ae2de768c770c7834f730ac67805\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc49327_fc63_43bc_a0af_0d8f8faf577f.slice/crio-a066d64403d422c76c2fc86e42028e9ebb19b20dbef11ebb3f6be2fdd16732fb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc49327_fc63_43bc_a0af_0d8f8faf577f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc49327_fc63_43bc_a0af_0d8f8faf577f.slice/crio-conmon-f98f759fafab36e664e74dd60b6a9ec3b315ec746708c360b7218bc28f63862c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice/crio-2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice/crio-conmon-2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca.scope\": RecentStats: unable to find data in memory cache]" Apr 16 18:32:37.838576 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:32:37.837917 2570 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc49327_fc63_43bc_a0af_0d8f8faf577f.slice/crio-conmon-f98f759fafab36e664e74dd60b6a9ec3b315ec746708c360b7218bc28f63862c.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc49327_fc63_43bc_a0af_0d8f8faf577f.slice/crio-f98f759fafab36e664e74dd60b6a9ec3b315ec746708c360b7218bc28f63862c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice/crio-af49b9f54bb8f1a29fe821388b81ee691283ae2de768c770c7834f730ac67805\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc49327_fc63_43bc_a0af_0d8f8faf577f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice/crio-conmon-2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc49327_fc63_43bc_a0af_0d8f8faf577f.slice/crio-a066d64403d422c76c2fc86e42028e9ebb19b20dbef11ebb3f6be2fdd16732fb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice/crio-2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca.scope\": RecentStats: unable to find data in memory cache]" Apr 16 18:32:37.838752 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:32:37.837838 2570 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice/crio-conmon-2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice/crio-af49b9f54bb8f1a29fe821388b81ee691283ae2de768c770c7834f730ac67805\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc49327_fc63_43bc_a0af_0d8f8faf577f.slice/crio-a066d64403d422c76c2fc86e42028e9ebb19b20dbef11ebb3f6be2fdd16732fb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice/crio-2c12f813df89196b5bdda816b3643a116c7f30db262b25fa993ce76efeccddca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc49327_fc63_43bc_a0af_0d8f8faf577f.slice/crio-f98f759fafab36e664e74dd60b6a9ec3b315ec746708c360b7218bc28f63862c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc49327_fc63_43bc_a0af_0d8f8faf577f.slice/crio-conmon-f98f759fafab36e664e74dd60b6a9ec3b315ec746708c360b7218bc28f63862c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc49327_fc63_43bc_a0af_0d8f8faf577f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7d7451_5b7d_4c20_9b30_787c22d2db07.slice\": RecentStats: unable to find data in memory cache]" Apr 16 18:32:38.458617 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:38.458591 
2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" Apr 16 18:32:38.547106 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:38.547071 2570 generic.go:358] "Generic (PLEG): container finished" podID="7c2bcbda-2193-421c-991f-44c08928d07a" containerID="67927e90fdbe066f5cd592c7abf113b786606ce625205cbe05aa540c5230f3c7" exitCode=0 Apr 16 18:32:38.547291 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:38.547140 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" Apr 16 18:32:38.547291 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:38.547139 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" event={"ID":"7c2bcbda-2193-421c-991f-44c08928d07a","Type":"ContainerDied","Data":"67927e90fdbe066f5cd592c7abf113b786606ce625205cbe05aa540c5230f3c7"} Apr 16 18:32:38.547291 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:38.547270 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8" event={"ID":"7c2bcbda-2193-421c-991f-44c08928d07a","Type":"ContainerDied","Data":"924bec179e552c1fc97a1cb84e0248389819ad6eb367201a571fbd3a91719909"} Apr 16 18:32:38.547291 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:38.547291 2570 scope.go:117] "RemoveContainer" containerID="67927e90fdbe066f5cd592c7abf113b786606ce625205cbe05aa540c5230f3c7" Apr 16 18:32:38.554655 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:38.554639 2570 scope.go:117] "RemoveContainer" containerID="67927e90fdbe066f5cd592c7abf113b786606ce625205cbe05aa540c5230f3c7" Apr 16 18:32:38.554900 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:32:38.554881 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67927e90fdbe066f5cd592c7abf113b786606ce625205cbe05aa540c5230f3c7\": container with ID starting with 67927e90fdbe066f5cd592c7abf113b786606ce625205cbe05aa540c5230f3c7 not found: ID does not exist" containerID="67927e90fdbe066f5cd592c7abf113b786606ce625205cbe05aa540c5230f3c7" Apr 16 18:32:38.554948 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:38.554907 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67927e90fdbe066f5cd592c7abf113b786606ce625205cbe05aa540c5230f3c7"} err="failed to get container status \"67927e90fdbe066f5cd592c7abf113b786606ce625205cbe05aa540c5230f3c7\": rpc error: code = NotFound desc = could not find container \"67927e90fdbe066f5cd592c7abf113b786606ce625205cbe05aa540c5230f3c7\": container with ID starting with 67927e90fdbe066f5cd592c7abf113b786606ce625205cbe05aa540c5230f3c7 not found: ID does not exist" Apr 16 18:32:38.613005 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:38.612933 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2bcbda-2193-421c-991f-44c08928d07a-openshift-service-ca-bundle\") pod \"7c2bcbda-2193-421c-991f-44c08928d07a\" (UID: \"7c2bcbda-2193-421c-991f-44c08928d07a\") " Apr 16 18:32:38.613005 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:38.612990 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c2bcbda-2193-421c-991f-44c08928d07a-proxy-tls\") pod 
\"7c2bcbda-2193-421c-991f-44c08928d07a\" (UID: \"7c2bcbda-2193-421c-991f-44c08928d07a\") " Apr 16 18:32:38.613299 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:38.613276 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c2bcbda-2193-421c-991f-44c08928d07a-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "7c2bcbda-2193-421c-991f-44c08928d07a" (UID: "7c2bcbda-2193-421c-991f-44c08928d07a"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:32:38.614912 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:38.614883 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2bcbda-2193-421c-991f-44c08928d07a-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "7c2bcbda-2193-421c-991f-44c08928d07a" (UID: "7c2bcbda-2193-421c-991f-44c08928d07a"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:32:38.714221 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:38.714170 2570 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2bcbda-2193-421c-991f-44c08928d07a-openshift-service-ca-bundle\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:32:38.714221 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:38.714216 2570 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c2bcbda-2193-421c-991f-44c08928d07a-proxy-tls\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:32:38.868555 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:38.868461 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8"] Apr 16 18:32:38.871492 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:38.871470 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-32270-678bc766cf-d8zx8"] Apr 16 18:32:39.781590 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:39.781558 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c2bcbda-2193-421c-991f-44c08928d07a" path="/var/lib/kubelet/pods/7c2bcbda-2193-421c-991f-44c08928d07a/volumes" Apr 16 18:32:44.509552 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:44.509513 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 16 18:32:44.510022 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:44.509953 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:32:54.509793 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:54.509711 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 16 18:32:54.510400 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:32:54.510266 2570 
prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:33:04.509510 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:04.509461 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 16 18:33:04.509977 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:04.509858 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:33:14.509952 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:14.509901 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 16 18:33:14.510465 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:14.510313 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:33:24.509007 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:24.508961 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 16 18:33:24.509579 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:24.509488 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:33:34.510378 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:34.510343 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" Apr 16 18:33:34.510865 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:34.510400 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" Apr 16 18:33:43.281240 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:43.281193 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf"] Apr 16 18:33:43.281743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:43.281494 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="kserve-container" containerID="cri-o://5e2b8ee9c6f30d019d332837df58caf7f8babb98a9b0dac2d527c4378cfbd0c6" gracePeriod=30 Apr 16 
18:33:43.281743 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:43.281538 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="agent" containerID="cri-o://5c9a11f7aeb47f5af28828429da553cdc990ae706832d8656c7c4cb41f277ef9" gracePeriod=30 Apr 16 18:33:43.318621 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:43.318586 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75"] Apr 16 18:33:43.318940 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:43.318918 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c2bcbda-2193-421c-991f-44c08928d07a" containerName="model-chainer-raw-hpa-32270" Apr 16 18:33:43.318940 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:43.318938 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c2bcbda-2193-421c-991f-44c08928d07a" containerName="model-chainer-raw-hpa-32270" Apr 16 18:33:43.319052 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:43.319035 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="7c2bcbda-2193-421c-991f-44c08928d07a" containerName="model-chainer-raw-hpa-32270" Apr 16 18:33:43.321676 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:43.321661 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" Apr 16 18:33:43.330350 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:43.330329 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75"] Apr 16 18:33:43.386334 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:43.386293 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a4fb2350-d34a-4a61-8d24-968d84706be8-kserve-provision-location\") pod \"isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75\" (UID: \"a4fb2350-d34a-4a61-8d24-968d84706be8\") " pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" Apr 16 18:33:43.486910 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:43.486878 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a4fb2350-d34a-4a61-8d24-968d84706be8-kserve-provision-location\") pod \"isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75\" (UID: \"a4fb2350-d34a-4a61-8d24-968d84706be8\") " pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" Apr 16 18:33:43.487293 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:43.487273 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a4fb2350-d34a-4a61-8d24-968d84706be8-kserve-provision-location\") pod \"isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75\" (UID: \"a4fb2350-d34a-4a61-8d24-968d84706be8\") " pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" Apr 16 18:33:43.632527 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:43.632490 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" Apr 16 18:33:43.748320 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:43.748283 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75"] Apr 16 18:33:43.752767 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:33:43.752735 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4fb2350_d34a_4a61_8d24_968d84706be8.slice/crio-124c1a615eab2d058bc67b0b52ffbc670f7a1a9701edcb057832732709fdfe8c WatchSource:0}: Error finding container 124c1a615eab2d058bc67b0b52ffbc670f7a1a9701edcb057832732709fdfe8c: Status 404 returned error can't find the container with id 124c1a615eab2d058bc67b0b52ffbc670f7a1a9701edcb057832732709fdfe8c Apr 16 18:33:44.509614 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:44.509568 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 16 18:33:44.510057 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:44.509938 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:33:44.728440 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:44.728410 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" event={"ID":"a4fb2350-d34a-4a61-8d24-968d84706be8","Type":"ContainerStarted","Data":"5ddff54a1327c63e1fb035a5df7fcb7ef8d9ddb02038948a67ac2d488f6ea860"} Apr 16 18:33:44.728613 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:44.728448 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" event={"ID":"a4fb2350-d34a-4a61-8d24-968d84706be8","Type":"ContainerStarted","Data":"124c1a615eab2d058bc67b0b52ffbc670f7a1a9701edcb057832732709fdfe8c"} Apr 16 18:33:47.738754 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:47.738671 2570 generic.go:358] "Generic (PLEG): container finished" podID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerID="5ddff54a1327c63e1fb035a5df7fcb7ef8d9ddb02038948a67ac2d488f6ea860" exitCode=0 Apr 16 18:33:47.739203 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:47.738745 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" event={"ID":"a4fb2350-d34a-4a61-8d24-968d84706be8","Type":"ContainerDied","Data":"5ddff54a1327c63e1fb035a5df7fcb7ef8d9ddb02038948a67ac2d488f6ea860"} Apr 16 18:33:47.740681 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:47.740658 2570 generic.go:358] "Generic (PLEG): container finished" podID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerID="5e2b8ee9c6f30d019d332837df58caf7f8babb98a9b0dac2d527c4378cfbd0c6" exitCode=0 Apr 16 18:33:47.740887 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:47.740699 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" 
event={"ID":"0c6021fd-aa29-4d75-a316-4660eafebeee","Type":"ContainerDied","Data":"5e2b8ee9c6f30d019d332837df58caf7f8babb98a9b0dac2d527c4378cfbd0c6"} Apr 16 18:33:48.745001 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:48.744968 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" event={"ID":"a4fb2350-d34a-4a61-8d24-968d84706be8","Type":"ContainerStarted","Data":"aeea397049f6d1d5ecc52de5c0768fd1a6488a8925f9e2045350b6b0fb56b5e5"} Apr 16 18:33:48.745466 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:48.745325 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" Apr 16 18:33:48.746478 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:48.746450 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 16 18:33:48.762907 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:48.762863 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podStartSLOduration=5.762848778 podStartE2EDuration="5.762848778s" podCreationTimestamp="2026-04-16 18:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:33:48.762071859 +0000 UTC m=+1059.567824662" watchObservedRunningTime="2026-04-16 18:33:48.762848778 +0000 UTC m=+1059.568601580" Apr 16 18:33:49.748131 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:49.748093 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 16 18:33:54.509221 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:54.509171 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 16 18:33:54.509691 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:54.509527 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:33:59.748622 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:33:59.748526 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 16 18:34:04.509908 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:04.509860 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" 
containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 16 18:34:04.510388 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:04.510015 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" Apr 16 18:34:04.510388 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:04.510196 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:34:04.510388 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:04.510317 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" Apr 16 18:34:09.748662 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:09.748623 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 16 18:34:13.464308 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.464286 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" Apr 16 18:34:13.608269 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.608157 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0c6021fd-aa29-4d75-a316-4660eafebeee-kserve-provision-location\") pod \"0c6021fd-aa29-4d75-a316-4660eafebeee\" (UID: \"0c6021fd-aa29-4d75-a316-4660eafebeee\") " Apr 16 18:34:13.608510 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.608482 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c6021fd-aa29-4d75-a316-4660eafebeee-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "0c6021fd-aa29-4d75-a316-4660eafebeee" (UID: "0c6021fd-aa29-4d75-a316-4660eafebeee"). InnerVolumeSpecName "kserve-provision-location". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:34:13.709099 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.709063 2570 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0c6021fd-aa29-4d75-a316-4660eafebeee-kserve-provision-location\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:34:13.812264 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.812220 2570 generic.go:358] "Generic (PLEG): container finished" podID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerID="5c9a11f7aeb47f5af28828429da553cdc990ae706832d8656c7c4cb41f277ef9" exitCode=137 Apr 16 18:34:13.812421 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.812304 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" event={"ID":"0c6021fd-aa29-4d75-a316-4660eafebeee","Type":"ContainerDied","Data":"5c9a11f7aeb47f5af28828429da553cdc990ae706832d8656c7c4cb41f277ef9"} Apr 16 18:34:13.812421 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.812322 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" Apr 16 18:34:13.812421 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.812340 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf" event={"ID":"0c6021fd-aa29-4d75-a316-4660eafebeee","Type":"ContainerDied","Data":"00fc00de29c66cda4a40d51c6f60fd944cccaaa5e583289ad9c8e3b5f3d63893"} Apr 16 18:34:13.812421 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.812356 2570 scope.go:117] "RemoveContainer" containerID="5c9a11f7aeb47f5af28828429da553cdc990ae706832d8656c7c4cb41f277ef9" Apr 16 18:34:13.819649 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.819633 2570 scope.go:117] "RemoveContainer" containerID="5e2b8ee9c6f30d019d332837df58caf7f8babb98a9b0dac2d527c4378cfbd0c6" Apr 16 18:34:13.827822 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.827796 2570 scope.go:117] "RemoveContainer" containerID="a04add077659df30f16fecd74ccb8196b9b9af2f51e55c919d3b47b5360cb5c4" Apr 16 18:34:13.834116 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.834102 2570 scope.go:117] "RemoveContainer" containerID="5c9a11f7aeb47f5af28828429da553cdc990ae706832d8656c7c4cb41f277ef9" Apr 16 18:34:13.834370 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:34:13.834353 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c9a11f7aeb47f5af28828429da553cdc990ae706832d8656c7c4cb41f277ef9\": container with ID starting with 5c9a11f7aeb47f5af28828429da553cdc990ae706832d8656c7c4cb41f277ef9 not found: ID does not exist" containerID="5c9a11f7aeb47f5af28828429da553cdc990ae706832d8656c7c4cb41f277ef9" Apr 16 18:34:13.834417 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.834380 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c9a11f7aeb47f5af28828429da553cdc990ae706832d8656c7c4cb41f277ef9"} err="failed to get container status \"5c9a11f7aeb47f5af28828429da553cdc990ae706832d8656c7c4cb41f277ef9\": rpc error: code = NotFound desc = could not find container \"5c9a11f7aeb47f5af28828429da553cdc990ae706832d8656c7c4cb41f277ef9\": container with ID starting with 5c9a11f7aeb47f5af28828429da553cdc990ae706832d8656c7c4cb41f277ef9 not found: ID does not exist" Apr 16 18:34:13.834417 ip-10-0-141-219 
kubenswrapper[2570]: I0416 18:34:13.834398 2570 scope.go:117] "RemoveContainer" containerID="5e2b8ee9c6f30d019d332837df58caf7f8babb98a9b0dac2d527c4378cfbd0c6" Apr 16 18:34:13.834604 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:34:13.834589 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e2b8ee9c6f30d019d332837df58caf7f8babb98a9b0dac2d527c4378cfbd0c6\": container with ID starting with 5e2b8ee9c6f30d019d332837df58caf7f8babb98a9b0dac2d527c4378cfbd0c6 not found: ID does not exist" containerID="5e2b8ee9c6f30d019d332837df58caf7f8babb98a9b0dac2d527c4378cfbd0c6" Apr 16 18:34:13.834657 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.834609 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e2b8ee9c6f30d019d332837df58caf7f8babb98a9b0dac2d527c4378cfbd0c6"} err="failed to get container status \"5e2b8ee9c6f30d019d332837df58caf7f8babb98a9b0dac2d527c4378cfbd0c6\": rpc error: code = NotFound desc = could not find container \"5e2b8ee9c6f30d019d332837df58caf7f8babb98a9b0dac2d527c4378cfbd0c6\": container with ID starting with 5e2b8ee9c6f30d019d332837df58caf7f8babb98a9b0dac2d527c4378cfbd0c6 not found: ID does not exist" Apr 16 18:34:13.834657 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.834627 2570 scope.go:117] "RemoveContainer" containerID="a04add077659df30f16fecd74ccb8196b9b9af2f51e55c919d3b47b5360cb5c4" Apr 16 18:34:13.834863 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:34:13.834845 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a04add077659df30f16fecd74ccb8196b9b9af2f51e55c919d3b47b5360cb5c4\": container with ID starting with a04add077659df30f16fecd74ccb8196b9b9af2f51e55c919d3b47b5360cb5c4 not found: ID does not exist" containerID="a04add077659df30f16fecd74ccb8196b9b9af2f51e55c919d3b47b5360cb5c4" Apr 16 18:34:13.834921 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.834866 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a04add077659df30f16fecd74ccb8196b9b9af2f51e55c919d3b47b5360cb5c4"} err="failed to get container status \"a04add077659df30f16fecd74ccb8196b9b9af2f51e55c919d3b47b5360cb5c4\": rpc error: code = NotFound desc = could not find container \"a04add077659df30f16fecd74ccb8196b9b9af2f51e55c919d3b47b5360cb5c4\": container with ID starting with a04add077659df30f16fecd74ccb8196b9b9af2f51e55c919d3b47b5360cb5c4 not found: ID does not exist" Apr 16 18:34:13.836730 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.836708 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf"] Apr 16 18:34:13.840499 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:13.840482 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-1141a-predictor-77b9c999df-hj9hf"] Apr 16 18:34:15.780766 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:15.780735 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" path="/var/lib/kubelet/pods/0c6021fd-aa29-4d75-a316-4660eafebeee/volumes" Apr 16 18:34:19.748986 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:19.748942 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" 
probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 16 18:34:29.748546 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:29.748494 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 16 18:34:39.748641 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:39.748590 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 16 18:34:49.748559 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:49.748514 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 16 18:34:56.778079 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:34:56.778027 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 16 18:35:06.779005 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:35:06.778963 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 16 18:35:16.778129 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:35:16.778089 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 16 18:35:26.778884 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:35:26.778837 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 16 18:35:36.778593 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:35:36.778497 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 16 18:35:46.778108 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:35:46.778056 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 
16 18:35:56.778571 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:35:56.778521 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 16 18:36:06.779436 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:06.779406 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" Apr 16 18:36:09.721488 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:09.721459 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-acl-logging/0.log" Apr 16 18:36:09.724410 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:09.724390 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-acl-logging/0.log" Apr 16 18:36:13.501906 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:13.501868 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75"] Apr 16 18:36:13.502301 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:13.502158 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" containerID="cri-o://aeea397049f6d1d5ecc52de5c0768fd1a6488a8925f9e2045350b6b0fb56b5e5" gracePeriod=30 Apr 16 18:36:13.600263 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:13.600212 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx"] Apr 16 18:36:13.600484 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:13.600473 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="storage-initializer" Apr 16 18:36:13.600527 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:13.600485 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="storage-initializer" Apr 16 18:36:13.600527 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:13.600499 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="kserve-container" Apr 16 18:36:13.600527 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:13.600505 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="kserve-container" Apr 16 18:36:13.600527 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:13.600513 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="agent" Apr 16 18:36:13.600527 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:13.600520 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="agent" Apr 16 18:36:13.600683 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:13.600560 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="agent" Apr 16 18:36:13.600683 ip-10-0-141-219 
kubenswrapper[2570]: I0416 18:36:13.600567 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="0c6021fd-aa29-4d75-a316-4660eafebeee" containerName="kserve-container" Apr 16 18:36:13.603492 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:13.603475 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" Apr 16 18:36:13.613398 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:13.613376 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx"] Apr 16 18:36:13.672029 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:13.671999 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/ae65c876-32fd-4382-ad92-942c6a9a69e9-kserve-provision-location\") pod \"isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx\" (UID: \"ae65c876-32fd-4382-ad92-942c6a9a69e9\") " pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" Apr 16 18:36:13.772936 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:13.772861 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/ae65c876-32fd-4382-ad92-942c6a9a69e9-kserve-provision-location\") pod \"isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx\" (UID: \"ae65c876-32fd-4382-ad92-942c6a9a69e9\") " pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" Apr 16 18:36:13.773264 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:13.773222 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/ae65c876-32fd-4382-ad92-942c6a9a69e9-kserve-provision-location\") pod \"isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx\" (UID: \"ae65c876-32fd-4382-ad92-942c6a9a69e9\") " pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" Apr 16 18:36:13.914319 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:13.914289 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" Apr 16 18:36:14.027377 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:14.027305 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx"] Apr 16 18:36:14.030438 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:36:14.030409 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae65c876_32fd_4382_ad92_942c6a9a69e9.slice/crio-12a24c0172907062e4e5d8d478466bcfd86847e2dc4dbaef0f5b821afbe7b683 WatchSource:0}: Error finding container 12a24c0172907062e4e5d8d478466bcfd86847e2dc4dbaef0f5b821afbe7b683: Status 404 returned error can't find the container with id 12a24c0172907062e4e5d8d478466bcfd86847e2dc4dbaef0f5b821afbe7b683 Apr 16 18:36:14.032848 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:14.032824 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 18:36:14.127716 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:14.127675 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" event={"ID":"ae65c876-32fd-4382-ad92-942c6a9a69e9","Type":"ContainerStarted","Data":"49e9500d81481e3ded0729618c83f3fe144d22823a712a7f4a2618a8b2f290a1"} Apr 16 18:36:14.127862 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:14.127727 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" event={"ID":"ae65c876-32fd-4382-ad92-942c6a9a69e9","Type":"ContainerStarted","Data":"12a24c0172907062e4e5d8d478466bcfd86847e2dc4dbaef0f5b821afbe7b683"} Apr 16 18:36:16.778625 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:16.778585 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 16 18:36:18.140509 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:18.140426 2570 generic.go:358] "Generic (PLEG): container finished" podID="ae65c876-32fd-4382-ad92-942c6a9a69e9" containerID="49e9500d81481e3ded0729618c83f3fe144d22823a712a7f4a2618a8b2f290a1" exitCode=0 Apr 16 18:36:18.140941 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:18.140502 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" event={"ID":"ae65c876-32fd-4382-ad92-942c6a9a69e9","Type":"ContainerDied","Data":"49e9500d81481e3ded0729618c83f3fe144d22823a712a7f4a2618a8b2f290a1"} Apr 16 18:36:19.144773 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:19.144731 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" event={"ID":"ae65c876-32fd-4382-ad92-942c6a9a69e9","Type":"ContainerStarted","Data":"486d948827afc1a7660636e3ebd1eb34a787726ddc48c4d04c1c1105bb7402bb"} Apr 16 18:36:19.145160 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:19.145055 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" Apr 16 18:36:19.146289 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:19.146260 2570 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" podUID="ae65c876-32fd-4382-ad92-942c6a9a69e9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 18:36:19.165801 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:19.165722 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" podStartSLOduration=6.165703578 podStartE2EDuration="6.165703578s" podCreationTimestamp="2026-04-16 18:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:36:19.163126474 +0000 UTC m=+1209.968879278" watchObservedRunningTime="2026-04-16 18:36:19.165703578 +0000 UTC m=+1209.971456371" Apr 16 18:36:20.148540 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:20.148496 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" podUID="ae65c876-32fd-4382-ad92-942c6a9a69e9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 18:36:22.530881 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:22.530859 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" Apr 16 18:36:22.629199 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:22.629165 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a4fb2350-d34a-4a61-8d24-968d84706be8-kserve-provision-location\") pod \"a4fb2350-d34a-4a61-8d24-968d84706be8\" (UID: \"a4fb2350-d34a-4a61-8d24-968d84706be8\") " Apr 16 18:36:22.629511 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:22.629486 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4fb2350-d34a-4a61-8d24-968d84706be8-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "a4fb2350-d34a-4a61-8d24-968d84706be8" (UID: "a4fb2350-d34a-4a61-8d24-968d84706be8"). InnerVolumeSpecName "kserve-provision-location". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:36:22.729700 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:22.729662 2570 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a4fb2350-d34a-4a61-8d24-968d84706be8-kserve-provision-location\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\"" Apr 16 18:36:23.161791 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:23.161755 2570 generic.go:358] "Generic (PLEG): container finished" podID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerID="aeea397049f6d1d5ecc52de5c0768fd1a6488a8925f9e2045350b6b0fb56b5e5" exitCode=0 Apr 16 18:36:23.161967 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:23.161808 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" event={"ID":"a4fb2350-d34a-4a61-8d24-968d84706be8","Type":"ContainerDied","Data":"aeea397049f6d1d5ecc52de5c0768fd1a6488a8925f9e2045350b6b0fb56b5e5"} Apr 16 18:36:23.161967 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:23.161834 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" event={"ID":"a4fb2350-d34a-4a61-8d24-968d84706be8","Type":"ContainerDied","Data":"124c1a615eab2d058bc67b0b52ffbc670f7a1a9701edcb057832732709fdfe8c"} Apr 16 18:36:23.161967 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:23.161848 2570 scope.go:117] "RemoveContainer" containerID="aeea397049f6d1d5ecc52de5c0768fd1a6488a8925f9e2045350b6b0fb56b5e5" Apr 16 18:36:23.161967 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:23.161857 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75" Apr 16 18:36:23.169540 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:23.169528 2570 scope.go:117] "RemoveContainer" containerID="5ddff54a1327c63e1fb035a5df7fcb7ef8d9ddb02038948a67ac2d488f6ea860" Apr 16 18:36:23.176573 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:23.176557 2570 scope.go:117] "RemoveContainer" containerID="aeea397049f6d1d5ecc52de5c0768fd1a6488a8925f9e2045350b6b0fb56b5e5" Apr 16 18:36:23.176810 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:36:23.176790 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeea397049f6d1d5ecc52de5c0768fd1a6488a8925f9e2045350b6b0fb56b5e5\": container with ID starting with aeea397049f6d1d5ecc52de5c0768fd1a6488a8925f9e2045350b6b0fb56b5e5 not found: ID does not exist" containerID="aeea397049f6d1d5ecc52de5c0768fd1a6488a8925f9e2045350b6b0fb56b5e5" Apr 16 18:36:23.176853 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:23.176819 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeea397049f6d1d5ecc52de5c0768fd1a6488a8925f9e2045350b6b0fb56b5e5"} err="failed to get container status \"aeea397049f6d1d5ecc52de5c0768fd1a6488a8925f9e2045350b6b0fb56b5e5\": rpc error: code = NotFound desc = could not find container \"aeea397049f6d1d5ecc52de5c0768fd1a6488a8925f9e2045350b6b0fb56b5e5\": container with ID starting with aeea397049f6d1d5ecc52de5c0768fd1a6488a8925f9e2045350b6b0fb56b5e5 not found: ID does not exist" Apr 16 18:36:23.176853 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:23.176836 2570 scope.go:117] "RemoveContainer" containerID="5ddff54a1327c63e1fb035a5df7fcb7ef8d9ddb02038948a67ac2d488f6ea860" Apr 16 18:36:23.177066 
ip-10-0-141-219 kubenswrapper[2570]: E0416 18:36:23.177047 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ddff54a1327c63e1fb035a5df7fcb7ef8d9ddb02038948a67ac2d488f6ea860\": container with ID starting with 5ddff54a1327c63e1fb035a5df7fcb7ef8d9ddb02038948a67ac2d488f6ea860 not found: ID does not exist" containerID="5ddff54a1327c63e1fb035a5df7fcb7ef8d9ddb02038948a67ac2d488f6ea860" Apr 16 18:36:23.177115 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:23.177071 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ddff54a1327c63e1fb035a5df7fcb7ef8d9ddb02038948a67ac2d488f6ea860"} err="failed to get container status \"5ddff54a1327c63e1fb035a5df7fcb7ef8d9ddb02038948a67ac2d488f6ea860\": rpc error: code = NotFound desc = could not find container \"5ddff54a1327c63e1fb035a5df7fcb7ef8d9ddb02038948a67ac2d488f6ea860\": container with ID starting with 5ddff54a1327c63e1fb035a5df7fcb7ef8d9ddb02038948a67ac2d488f6ea860 not found: ID does not exist" Apr 16 18:36:23.182350 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:23.182321 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75"] Apr 16 18:36:23.185000 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:23.184982 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-5b9cb-predictor-766bb68968-6dr75"] Apr 16 18:36:23.780997 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:23.780962 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" path="/var/lib/kubelet/pods/a4fb2350-d34a-4a61-8d24-968d84706be8/volumes" Apr 16 18:36:30.148884 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:30.148831 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" podUID="ae65c876-32fd-4382-ad92-942c6a9a69e9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 18:36:40.149312 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:40.149264 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" podUID="ae65c876-32fd-4382-ad92-942c6a9a69e9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 18:36:50.148705 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:36:50.148662 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" podUID="ae65c876-32fd-4382-ad92-942c6a9a69e9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 18:37:00.148695 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:00.148597 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" podUID="ae65c876-32fd-4382-ad92-942c6a9a69e9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 18:37:10.148961 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:10.148921 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" podUID="ae65c876-32fd-4382-ad92-942c6a9a69e9" 
containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 18:37:20.148929 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:20.148876 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" podUID="ae65c876-32fd-4382-ad92-942c6a9a69e9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 18:37:30.148963 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:30.148928 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" Apr 16 18:37:33.801071 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:33.801038 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m"] Apr 16 18:37:33.801477 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:33.801319 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" Apr 16 18:37:33.801477 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:33.801331 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" Apr 16 18:37:33.801477 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:33.801346 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="storage-initializer" Apr 16 18:37:33.801477 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:33.801352 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="storage-initializer" Apr 16 18:37:33.801477 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:33.801393 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="a4fb2350-d34a-4a61-8d24-968d84706be8" containerName="kserve-container" Apr 16 18:37:33.804511 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:33.804494 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m" Apr 16 18:37:33.806920 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:33.806894 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"invalid-s3-secret-1b6d1a\"" Apr 16 18:37:33.807783 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:33.807765 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"invalid-s3-sa-1b6d1a-dockercfg-2pnf2\"" Apr 16 18:37:33.807878 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:33.807767 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"odh-kserve-custom-ca-bundle\"" Apr 16 18:37:33.815247 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:33.815201 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m"] Apr 16 18:37:33.936719 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:33.936684 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/40631665-a2ce-4691-8206-5167175ba62d-cabundle-cert\") pod \"isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m\" (UID: \"40631665-a2ce-4691-8206-5167175ba62d\") " pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m" Apr 16 18:37:33.936894 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:33.936792 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/40631665-a2ce-4691-8206-5167175ba62d-kserve-provision-location\") pod \"isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m\" (UID: \"40631665-a2ce-4691-8206-5167175ba62d\") " pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m" Apr 16 18:37:34.037582 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:34.037544 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/40631665-a2ce-4691-8206-5167175ba62d-kserve-provision-location\") pod \"isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m\" (UID: \"40631665-a2ce-4691-8206-5167175ba62d\") " pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m" Apr 16 18:37:34.037792 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:34.037605 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/40631665-a2ce-4691-8206-5167175ba62d-cabundle-cert\") pod \"isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m\" (UID: \"40631665-a2ce-4691-8206-5167175ba62d\") " pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m" Apr 16 18:37:34.037930 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:34.037910 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/40631665-a2ce-4691-8206-5167175ba62d-kserve-provision-location\") pod \"isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m\" (UID: \"40631665-a2ce-4691-8206-5167175ba62d\") " pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m" Apr 16 18:37:34.038151 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:34.038133 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle-cert\" (UniqueName: 
\"kubernetes.io/configmap/40631665-a2ce-4691-8206-5167175ba62d-cabundle-cert\") pod \"isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m\" (UID: \"40631665-a2ce-4691-8206-5167175ba62d\") " pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m" Apr 16 18:37:34.119682 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:34.119653 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m" Apr 16 18:37:34.233080 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:34.232992 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m"] Apr 16 18:37:34.235833 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:37:34.235798 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40631665_a2ce_4691_8206_5167175ba62d.slice/crio-411958d7b8599de5325bb3fadbeee03440f04a63f1170e330b8bdd80161dde74 WatchSource:0}: Error finding container 411958d7b8599de5325bb3fadbeee03440f04a63f1170e330b8bdd80161dde74: Status 404 returned error can't find the container with id 411958d7b8599de5325bb3fadbeee03440f04a63f1170e330b8bdd80161dde74 Apr 16 18:37:34.349467 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:34.349432 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m" event={"ID":"40631665-a2ce-4691-8206-5167175ba62d","Type":"ContainerStarted","Data":"0b1d20dc27138dd2d7d991b11e100cea4ceb791a3c93f719e9e24c21096195ca"} Apr 16 18:37:34.349467 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:34.349469 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m" event={"ID":"40631665-a2ce-4691-8206-5167175ba62d","Type":"ContainerStarted","Data":"411958d7b8599de5325bb3fadbeee03440f04a63f1170e330b8bdd80161dde74"} Apr 16 18:37:39.364546 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:39.364510 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m_40631665-a2ce-4691-8206-5167175ba62d/storage-initializer/0.log" Apr 16 18:37:39.365003 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:39.364554 2570 generic.go:358] "Generic (PLEG): container finished" podID="40631665-a2ce-4691-8206-5167175ba62d" containerID="0b1d20dc27138dd2d7d991b11e100cea4ceb791a3c93f719e9e24c21096195ca" exitCode=1 Apr 16 18:37:39.365003 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:39.364629 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m" event={"ID":"40631665-a2ce-4691-8206-5167175ba62d","Type":"ContainerDied","Data":"0b1d20dc27138dd2d7d991b11e100cea4ceb791a3c93f719e9e24c21096195ca"} Apr 16 18:37:40.368841 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:40.368807 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m_40631665-a2ce-4691-8206-5167175ba62d/storage-initializer/0.log" Apr 16 18:37:40.369346 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:40.368902 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m" event={"ID":"40631665-a2ce-4691-8206-5167175ba62d","Type":"ContainerStarted","Data":"fbb67ab2b24f86f6dee41eb88a613de476392feed864cc971efdc443ee19745b"} Apr 16 
18:37:45.385090 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:45.385060 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m_40631665-a2ce-4691-8206-5167175ba62d/storage-initializer/1.log"
Apr 16 18:37:45.385500 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:45.385460 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m_40631665-a2ce-4691-8206-5167175ba62d/storage-initializer/0.log"
Apr 16 18:37:45.385500 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:45.385494 2570 generic.go:358] "Generic (PLEG): container finished" podID="40631665-a2ce-4691-8206-5167175ba62d" containerID="fbb67ab2b24f86f6dee41eb88a613de476392feed864cc971efdc443ee19745b" exitCode=1
Apr 16 18:37:45.385595 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:45.385571 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m" event={"ID":"40631665-a2ce-4691-8206-5167175ba62d","Type":"ContainerDied","Data":"fbb67ab2b24f86f6dee41eb88a613de476392feed864cc971efdc443ee19745b"}
Apr 16 18:37:45.385664 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:45.385619 2570 scope.go:117] "RemoveContainer" containerID="0b1d20dc27138dd2d7d991b11e100cea4ceb791a3c93f719e9e24c21096195ca"
Apr 16 18:37:45.385984 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:45.385965 2570 scope.go:117] "RemoveContainer" containerID="0b1d20dc27138dd2d7d991b11e100cea4ceb791a3c93f719e9e24c21096195ca"
Apr 16 18:37:45.395645 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:37:45.395617 2570 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_storage-initializer_isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m_kserve-ci-e2e-test_40631665-a2ce-4691-8206-5167175ba62d_0 in pod sandbox 411958d7b8599de5325bb3fadbeee03440f04a63f1170e330b8bdd80161dde74 from index: no such id: '0b1d20dc27138dd2d7d991b11e100cea4ceb791a3c93f719e9e24c21096195ca'" containerID="0b1d20dc27138dd2d7d991b11e100cea4ceb791a3c93f719e9e24c21096195ca"
Apr 16 18:37:45.395716 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:37:45.395666 2570 kuberuntime_container.go:951] "Unhandled Error" err="failed to remove pod init container \"storage-initializer\": rpc error: code = Unknown desc = failed to delete container k8s_storage-initializer_isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m_kserve-ci-e2e-test_40631665-a2ce-4691-8206-5167175ba62d_0 in pod sandbox 411958d7b8599de5325bb3fadbeee03440f04a63f1170e330b8bdd80161dde74 from index: no such id: '0b1d20dc27138dd2d7d991b11e100cea4ceb791a3c93f719e9e24c21096195ca'; Skipping pod \"isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m_kserve-ci-e2e-test(40631665-a2ce-4691-8206-5167175ba62d)\"" logger="UnhandledError"
Apr 16 18:37:45.396992 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:37:45.396973 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-initializer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-initializer pod=isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m_kserve-ci-e2e-test(40631665-a2ce-4691-8206-5167175ba62d)\"" pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m" podUID="40631665-a2ce-4691-8206-5167175ba62d"
Apr 16 18:37:46.389362 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:46.389334 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m_40631665-a2ce-4691-8206-5167175ba62d/storage-initializer/1.log"
Apr 16 18:37:49.812073 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:49.812039 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m"]
Apr 16 18:37:49.868844 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:49.868814 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx"]
Apr 16 18:37:49.869125 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:49.869096 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" podUID="ae65c876-32fd-4382-ad92-942c6a9a69e9" containerName="kserve-container" containerID="cri-o://486d948827afc1a7660636e3ebd1eb34a787726ddc48c4d04c1c1105bb7402bb" gracePeriod=30
Apr 16 18:37:49.948067 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:49.948044 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m_40631665-a2ce-4691-8206-5167175ba62d/storage-initializer/1.log"
Apr 16 18:37:49.948185 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:49.948106 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m"
Apr 16 18:37:49.949938 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:49.949913 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58"]
Apr 16 18:37:49.950170 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:49.950159 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40631665-a2ce-4691-8206-5167175ba62d" containerName="storage-initializer"
Apr 16 18:37:49.950213 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:49.950172 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="40631665-a2ce-4691-8206-5167175ba62d" containerName="storage-initializer"
Apr 16 18:37:49.950213 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:49.950181 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40631665-a2ce-4691-8206-5167175ba62d" containerName="storage-initializer"
Apr 16 18:37:49.950213 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:49.950186 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="40631665-a2ce-4691-8206-5167175ba62d" containerName="storage-initializer"
Apr 16 18:37:49.950326 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:49.950275 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="40631665-a2ce-4691-8206-5167175ba62d" containerName="storage-initializer"
Apr 16 18:37:49.950326 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:49.950288 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="40631665-a2ce-4691-8206-5167175ba62d" containerName="storage-initializer"
Apr 16 18:37:49.954128 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:49.954110 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58"
Apr 16 18:37:49.956279 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:49.956259 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"fail-s3-secret-e032fb\""
Apr 16 18:37:49.956366 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:49.956259 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"fail-s3-sa-e032fb-dockercfg-88rjx\""
Apr 16 18:37:49.963998 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:49.963975 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58"]
Apr 16 18:37:50.050750 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.050719 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/40631665-a2ce-4691-8206-5167175ba62d-kserve-provision-location\") pod \"40631665-a2ce-4691-8206-5167175ba62d\" (UID: \"40631665-a2ce-4691-8206-5167175ba62d\") "
Apr 16 18:37:50.050912 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.050788 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/40631665-a2ce-4691-8206-5167175ba62d-cabundle-cert\") pod \"40631665-a2ce-4691-8206-5167175ba62d\" (UID: \"40631665-a2ce-4691-8206-5167175ba62d\") "
Apr 16 18:37:50.050912 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.050888 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/d3fcb851-f53a-40ba-a13c-88baf419533d-cabundle-cert\") pod \"isvc-init-fail-e032fb-predictor-55b46c576-5ln58\" (UID: \"d3fcb851-f53a-40ba-a13c-88baf419533d\") " pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58"
Apr 16 18:37:50.050994 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.050917 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/d3fcb851-f53a-40ba-a13c-88baf419533d-kserve-provision-location\") pod \"isvc-init-fail-e032fb-predictor-55b46c576-5ln58\" (UID: \"d3fcb851-f53a-40ba-a13c-88baf419533d\") " pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58"
Apr 16 18:37:50.051057 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.051030 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40631665-a2ce-4691-8206-5167175ba62d-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "40631665-a2ce-4691-8206-5167175ba62d" (UID: "40631665-a2ce-4691-8206-5167175ba62d"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 16 18:37:50.051115 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.051101 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40631665-a2ce-4691-8206-5167175ba62d-cabundle-cert" (OuterVolumeSpecName: "cabundle-cert") pod "40631665-a2ce-4691-8206-5167175ba62d" (UID: "40631665-a2ce-4691-8206-5167175ba62d"). InnerVolumeSpecName "cabundle-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 16 18:37:50.148817 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.148783 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" podUID="ae65c876-32fd-4382-ad92-942c6a9a69e9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused"
Apr 16 18:37:50.151518 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.151494 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/d3fcb851-f53a-40ba-a13c-88baf419533d-cabundle-cert\") pod \"isvc-init-fail-e032fb-predictor-55b46c576-5ln58\" (UID: \"d3fcb851-f53a-40ba-a13c-88baf419533d\") " pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58"
Apr 16 18:37:50.151636 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.151534 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/d3fcb851-f53a-40ba-a13c-88baf419533d-kserve-provision-location\") pod \"isvc-init-fail-e032fb-predictor-55b46c576-5ln58\" (UID: \"d3fcb851-f53a-40ba-a13c-88baf419533d\") " pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58"
Apr 16 18:37:50.151636 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.151567 2570 reconciler_common.go:299] "Volume detached for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/40631665-a2ce-4691-8206-5167175ba62d-cabundle-cert\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\""
Apr 16 18:37:50.151636 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.151578 2570 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/40631665-a2ce-4691-8206-5167175ba62d-kserve-provision-location\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\""
Apr 16 18:37:50.151863 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.151848 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/d3fcb851-f53a-40ba-a13c-88baf419533d-kserve-provision-location\") pod \"isvc-init-fail-e032fb-predictor-55b46c576-5ln58\" (UID: \"d3fcb851-f53a-40ba-a13c-88baf419533d\") " pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58"
Apr 16 18:37:50.152096 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.152076 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/d3fcb851-f53a-40ba-a13c-88baf419533d-cabundle-cert\") pod \"isvc-init-fail-e032fb-predictor-55b46c576-5ln58\" (UID: \"d3fcb851-f53a-40ba-a13c-88baf419533d\") " pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58"
Apr 16 18:37:50.263563 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.263522 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58"
Apr 16 18:37:50.377212 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.377181 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58"]
Apr 16 18:37:50.380619 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:37:50.380590 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3fcb851_f53a_40ba_a13c_88baf419533d.slice/crio-2ac4daed76f600a04123564718f1d7e9bf8dd317ca6b73f0c2b5c3821c0a3142 WatchSource:0}: Error finding container 2ac4daed76f600a04123564718f1d7e9bf8dd317ca6b73f0c2b5c3821c0a3142: Status 404 returned error can't find the container with id 2ac4daed76f600a04123564718f1d7e9bf8dd317ca6b73f0c2b5c3821c0a3142
Apr 16 18:37:50.405138 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.405086 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m_40631665-a2ce-4691-8206-5167175ba62d/storage-initializer/1.log"
Apr 16 18:37:50.405254 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.405184 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m" event={"ID":"40631665-a2ce-4691-8206-5167175ba62d","Type":"ContainerDied","Data":"411958d7b8599de5325bb3fadbeee03440f04a63f1170e330b8bdd80161dde74"}
Apr 16 18:37:50.405254 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.405213 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m"
Apr 16 18:37:50.405254 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.405223 2570 scope.go:117] "RemoveContainer" containerID="fbb67ab2b24f86f6dee41eb88a613de476392feed864cc971efdc443ee19745b"
Apr 16 18:37:50.406386 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.406358 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58" event={"ID":"d3fcb851-f53a-40ba-a13c-88baf419533d","Type":"ContainerStarted","Data":"2ac4daed76f600a04123564718f1d7e9bf8dd317ca6b73f0c2b5c3821c0a3142"}
Apr 16 18:37:50.443354 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.443326 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m"]
Apr 16 18:37:50.449279 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:50.449255 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-1b6d1a-predictor-5cd95c94db-85c2m"]
Apr 16 18:37:51.411629 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:51.411597 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58" event={"ID":"d3fcb851-f53a-40ba-a13c-88baf419533d","Type":"ContainerStarted","Data":"02e2c16a79f2a1c62a0f271a5163aad6d1e7f1a4d520ff01c255c331da34d621"}
Apr 16 18:37:51.781286 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:51.781184 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40631665-a2ce-4691-8206-5167175ba62d" path="/var/lib/kubelet/pods/40631665-a2ce-4691-8206-5167175ba62d/volumes"
Apr 16 18:37:54.299463 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:54.299441 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx"
Apr 16 18:37:54.387403 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:54.387372 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/ae65c876-32fd-4382-ad92-942c6a9a69e9-kserve-provision-location\") pod \"ae65c876-32fd-4382-ad92-942c6a9a69e9\" (UID: \"ae65c876-32fd-4382-ad92-942c6a9a69e9\") "
Apr 16 18:37:54.387674 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:54.387655 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae65c876-32fd-4382-ad92-942c6a9a69e9-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "ae65c876-32fd-4382-ad92-942c6a9a69e9" (UID: "ae65c876-32fd-4382-ad92-942c6a9a69e9"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 16 18:37:54.419334 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:54.419252 2570 generic.go:358] "Generic (PLEG): container finished" podID="ae65c876-32fd-4382-ad92-942c6a9a69e9" containerID="486d948827afc1a7660636e3ebd1eb34a787726ddc48c4d04c1c1105bb7402bb" exitCode=0
Apr 16 18:37:54.419334 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:54.419310 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx"
Apr 16 18:37:54.419500 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:54.419310 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" event={"ID":"ae65c876-32fd-4382-ad92-942c6a9a69e9","Type":"ContainerDied","Data":"486d948827afc1a7660636e3ebd1eb34a787726ddc48c4d04c1c1105bb7402bb"}
Apr 16 18:37:54.419500 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:54.419409 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx" event={"ID":"ae65c876-32fd-4382-ad92-942c6a9a69e9","Type":"ContainerDied","Data":"12a24c0172907062e4e5d8d478466bcfd86847e2dc4dbaef0f5b821afbe7b683"}
Apr 16 18:37:54.419500 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:54.419423 2570 scope.go:117] "RemoveContainer" containerID="486d948827afc1a7660636e3ebd1eb34a787726ddc48c4d04c1c1105bb7402bb"
Apr 16 18:37:54.427020 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:54.426995 2570 scope.go:117] "RemoveContainer" containerID="49e9500d81481e3ded0729618c83f3fe144d22823a712a7f4a2618a8b2f290a1"
Apr 16 18:37:54.433963 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:54.433946 2570 scope.go:117] "RemoveContainer" containerID="486d948827afc1a7660636e3ebd1eb34a787726ddc48c4d04c1c1105bb7402bb"
Apr 16 18:37:54.434218 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:37:54.434196 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"486d948827afc1a7660636e3ebd1eb34a787726ddc48c4d04c1c1105bb7402bb\": container with ID starting with 486d948827afc1a7660636e3ebd1eb34a787726ddc48c4d04c1c1105bb7402bb not found: ID does not exist" containerID="486d948827afc1a7660636e3ebd1eb34a787726ddc48c4d04c1c1105bb7402bb"
Apr 16 18:37:54.434335 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:54.434245 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486d948827afc1a7660636e3ebd1eb34a787726ddc48c4d04c1c1105bb7402bb"} err="failed to get container status \"486d948827afc1a7660636e3ebd1eb34a787726ddc48c4d04c1c1105bb7402bb\": rpc error: code = NotFound desc = could not find container \"486d948827afc1a7660636e3ebd1eb34a787726ddc48c4d04c1c1105bb7402bb\": container with ID starting with 486d948827afc1a7660636e3ebd1eb34a787726ddc48c4d04c1c1105bb7402bb not found: ID does not exist"
Apr 16 18:37:54.434335 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:54.434266 2570 scope.go:117] "RemoveContainer" containerID="49e9500d81481e3ded0729618c83f3fe144d22823a712a7f4a2618a8b2f290a1"
Apr 16 18:37:54.434516 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:37:54.434502 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49e9500d81481e3ded0729618c83f3fe144d22823a712a7f4a2618a8b2f290a1\": container with ID starting with 49e9500d81481e3ded0729618c83f3fe144d22823a712a7f4a2618a8b2f290a1 not found: ID does not exist" containerID="49e9500d81481e3ded0729618c83f3fe144d22823a712a7f4a2618a8b2f290a1"
Apr 16 18:37:54.434557 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:54.434519 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49e9500d81481e3ded0729618c83f3fe144d22823a712a7f4a2618a8b2f290a1"} err="failed to get container status \"49e9500d81481e3ded0729618c83f3fe144d22823a712a7f4a2618a8b2f290a1\": rpc error: code = NotFound desc = could not find container \"49e9500d81481e3ded0729618c83f3fe144d22823a712a7f4a2618a8b2f290a1\": container with ID starting with 49e9500d81481e3ded0729618c83f3fe144d22823a712a7f4a2618a8b2f290a1 not found: ID does not exist"
Apr 16 18:37:54.438936 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:54.438914 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx"]
Apr 16 18:37:54.444912 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:54.444884 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-1b6d1a-predictor-6ff5998d79-db6nx"]
Apr 16 18:37:54.488511 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:54.488476 2570 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/ae65c876-32fd-4382-ad92-942c6a9a69e9-kserve-provision-location\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\""
Apr 16 18:37:55.424040 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:55.423969 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-e032fb-predictor-55b46c576-5ln58_d3fcb851-f53a-40ba-a13c-88baf419533d/storage-initializer/0.log"
Apr 16 18:37:55.424040 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:55.424001 2570 generic.go:358] "Generic (PLEG): container finished" podID="d3fcb851-f53a-40ba-a13c-88baf419533d" containerID="02e2c16a79f2a1c62a0f271a5163aad6d1e7f1a4d520ff01c255c331da34d621" exitCode=1
Apr 16 18:37:55.424470 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:55.424044 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58" event={"ID":"d3fcb851-f53a-40ba-a13c-88baf419533d","Type":"ContainerDied","Data":"02e2c16a79f2a1c62a0f271a5163aad6d1e7f1a4d520ff01c255c331da34d621"}
Apr 16 18:37:55.781346 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:55.781259 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae65c876-32fd-4382-ad92-942c6a9a69e9" path="/var/lib/kubelet/pods/ae65c876-32fd-4382-ad92-942c6a9a69e9/volumes"
Apr 16 18:37:56.428879 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:56.428853 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-e032fb-predictor-55b46c576-5ln58_d3fcb851-f53a-40ba-a13c-88baf419533d/storage-initializer/0.log"
Apr 16 18:37:56.429283 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:56.428898 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58" event={"ID":"d3fcb851-f53a-40ba-a13c-88baf419533d","Type":"ContainerStarted","Data":"1827a0a271c7d0a0e53afbcb696ee828c98f8fd5f459823862d23e21d151672d"}
Apr 16 18:37:59.965238 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:59.965193 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58"]
Apr 16 18:37:59.965623 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:37:59.965472 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58" podUID="d3fcb851-f53a-40ba-a13c-88baf419533d" containerName="storage-initializer" containerID="cri-o://1827a0a271c7d0a0e53afbcb696ee828c98f8fd5f459823862d23e21d151672d" gracePeriod=30
Apr 16 18:38:00.209507 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:00.209466 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r"]
Apr 16 18:38:00.209784 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:00.209769 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae65c876-32fd-4382-ad92-942c6a9a69e9" containerName="storage-initializer"
Apr 16 18:38:00.209784 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:00.209785 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae65c876-32fd-4382-ad92-942c6a9a69e9" containerName="storage-initializer"
Apr 16 18:38:00.209865 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:00.209801 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae65c876-32fd-4382-ad92-942c6a9a69e9" containerName="kserve-container"
Apr 16 18:38:00.209865 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:00.209807 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae65c876-32fd-4382-ad92-942c6a9a69e9" containerName="kserve-container"
Apr 16 18:38:00.209865 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:00.209860 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="ae65c876-32fd-4382-ad92-942c6a9a69e9" containerName="kserve-container"
Apr 16 18:38:00.212840 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:00.212817 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r"
Apr 16 18:38:00.215390 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:00.215331 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-842jm\""
Apr 16 18:38:00.222720 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:00.222699 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r"]
Apr 16 18:38:00.224631 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:00.224604 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f1857e4f-d3f8-4cbd-a1b3-93d56fa57769-kserve-provision-location\") pod \"raw-sklearn-6fff8-predictor-577c9b867b-wv48r\" (UID: \"f1857e4f-d3f8-4cbd-a1b3-93d56fa57769\") " pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r"
Apr 16 18:38:00.324963 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:00.324929 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f1857e4f-d3f8-4cbd-a1b3-93d56fa57769-kserve-provision-location\") pod \"raw-sklearn-6fff8-predictor-577c9b867b-wv48r\" (UID: \"f1857e4f-d3f8-4cbd-a1b3-93d56fa57769\") " pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r"
Apr 16 18:38:00.325303 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:00.325286 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f1857e4f-d3f8-4cbd-a1b3-93d56fa57769-kserve-provision-location\") pod \"raw-sklearn-6fff8-predictor-577c9b867b-wv48r\" (UID: \"f1857e4f-d3f8-4cbd-a1b3-93d56fa57769\") " pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r"
Apr 16 18:38:00.522958 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:00.522869 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r"
Apr 16 18:38:00.638415 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:00.638243 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r"]
Apr 16 18:38:00.641116 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:38:00.641091 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1857e4f_d3f8_4cbd_a1b3_93d56fa57769.slice/crio-da2756bbdfde2b1c3849caccd7a3783004ed68c8f028392aa6eb9759e2f4607f WatchSource:0}: Error finding container da2756bbdfde2b1c3849caccd7a3783004ed68c8f028392aa6eb9759e2f4607f: Status 404 returned error can't find the container with id da2756bbdfde2b1c3849caccd7a3783004ed68c8f028392aa6eb9759e2f4607f
Apr 16 18:38:01.447222 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:01.447187 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" event={"ID":"f1857e4f-d3f8-4cbd-a1b3-93d56fa57769","Type":"ContainerStarted","Data":"4d0b0d5d942bd7b5a6234c1657ef597a39dab717c0dc9a8005233a2d7ce1d7f7"}
Apr 16 18:38:01.447222 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:01.447223 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" event={"ID":"f1857e4f-d3f8-4cbd-a1b3-93d56fa57769","Type":"ContainerStarted","Data":"da2756bbdfde2b1c3849caccd7a3783004ed68c8f028392aa6eb9759e2f4607f"}
Apr 16 18:38:02.706633 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:02.706610 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-e032fb-predictor-55b46c576-5ln58_d3fcb851-f53a-40ba-a13c-88baf419533d/storage-initializer/1.log"
Apr 16 18:38:02.706982 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:02.706949 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-e032fb-predictor-55b46c576-5ln58_d3fcb851-f53a-40ba-a13c-88baf419533d/storage-initializer/0.log"
Apr 16 18:38:02.707023 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:02.707007 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58"
Apr 16 18:38:02.739951 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:02.739927 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/d3fcb851-f53a-40ba-a13c-88baf419533d-cabundle-cert\") pod \"d3fcb851-f53a-40ba-a13c-88baf419533d\" (UID: \"d3fcb851-f53a-40ba-a13c-88baf419533d\") "
Apr 16 18:38:02.740078 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:02.739974 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/d3fcb851-f53a-40ba-a13c-88baf419533d-kserve-provision-location\") pod \"d3fcb851-f53a-40ba-a13c-88baf419533d\" (UID: \"d3fcb851-f53a-40ba-a13c-88baf419533d\") "
Apr 16 18:38:02.740273 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:02.740250 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3fcb851-f53a-40ba-a13c-88baf419533d-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "d3fcb851-f53a-40ba-a13c-88baf419533d" (UID: "d3fcb851-f53a-40ba-a13c-88baf419533d"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 16 18:38:02.740324 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:02.740281 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3fcb851-f53a-40ba-a13c-88baf419533d-cabundle-cert" (OuterVolumeSpecName: "cabundle-cert") pod "d3fcb851-f53a-40ba-a13c-88baf419533d" (UID: "d3fcb851-f53a-40ba-a13c-88baf419533d"). InnerVolumeSpecName "cabundle-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 16 18:38:02.841382 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:02.841303 2570 reconciler_common.go:299] "Volume detached for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/d3fcb851-f53a-40ba-a13c-88baf419533d-cabundle-cert\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\""
Apr 16 18:38:02.841382 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:02.841338 2570 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/d3fcb851-f53a-40ba-a13c-88baf419533d-kserve-provision-location\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\""
Apr 16 18:38:03.453797 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:03.453767 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-e032fb-predictor-55b46c576-5ln58_d3fcb851-f53a-40ba-a13c-88baf419533d/storage-initializer/1.log"
Apr 16 18:38:03.454126 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:03.454110 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-e032fb-predictor-55b46c576-5ln58_d3fcb851-f53a-40ba-a13c-88baf419533d/storage-initializer/0.log"
Apr 16 18:38:03.454178 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:03.454146 2570 generic.go:358] "Generic (PLEG): container finished" podID="d3fcb851-f53a-40ba-a13c-88baf419533d" containerID="1827a0a271c7d0a0e53afbcb696ee828c98f8fd5f459823862d23e21d151672d" exitCode=1
Apr 16 18:38:03.454213 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:03.454186 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58" event={"ID":"d3fcb851-f53a-40ba-a13c-88baf419533d","Type":"ContainerDied","Data":"1827a0a271c7d0a0e53afbcb696ee828c98f8fd5f459823862d23e21d151672d"}
Apr 16 18:38:03.454213 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:03.454208 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58" event={"ID":"d3fcb851-f53a-40ba-a13c-88baf419533d","Type":"ContainerDied","Data":"2ac4daed76f600a04123564718f1d7e9bf8dd317ca6b73f0c2b5c3821c0a3142"}
Apr 16 18:38:03.454303 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:03.454217 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58"
Apr 16 18:38:03.454303 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:03.454241 2570 scope.go:117] "RemoveContainer" containerID="1827a0a271c7d0a0e53afbcb696ee828c98f8fd5f459823862d23e21d151672d"
Apr 16 18:38:03.461469 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:03.461449 2570 scope.go:117] "RemoveContainer" containerID="02e2c16a79f2a1c62a0f271a5163aad6d1e7f1a4d520ff01c255c331da34d621"
Apr 16 18:38:03.468289 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:03.468271 2570 scope.go:117] "RemoveContainer" containerID="1827a0a271c7d0a0e53afbcb696ee828c98f8fd5f459823862d23e21d151672d"
Apr 16 18:38:03.468525 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:38:03.468505 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1827a0a271c7d0a0e53afbcb696ee828c98f8fd5f459823862d23e21d151672d\": container with ID starting with 1827a0a271c7d0a0e53afbcb696ee828c98f8fd5f459823862d23e21d151672d not found: ID does not exist" containerID="1827a0a271c7d0a0e53afbcb696ee828c98f8fd5f459823862d23e21d151672d"
Apr 16 18:38:03.468587 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:03.468534 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1827a0a271c7d0a0e53afbcb696ee828c98f8fd5f459823862d23e21d151672d"} err="failed to get container status \"1827a0a271c7d0a0e53afbcb696ee828c98f8fd5f459823862d23e21d151672d\": rpc error: code = NotFound desc = could not find container \"1827a0a271c7d0a0e53afbcb696ee828c98f8fd5f459823862d23e21d151672d\": container with ID starting with 1827a0a271c7d0a0e53afbcb696ee828c98f8fd5f459823862d23e21d151672d not found: ID does not exist"
Apr 16 18:38:03.468587 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:03.468552 2570 scope.go:117] "RemoveContainer" containerID="02e2c16a79f2a1c62a0f271a5163aad6d1e7f1a4d520ff01c255c331da34d621"
Apr 16 18:38:03.468763 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:38:03.468747 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02e2c16a79f2a1c62a0f271a5163aad6d1e7f1a4d520ff01c255c331da34d621\": container with ID starting with 02e2c16a79f2a1c62a0f271a5163aad6d1e7f1a4d520ff01c255c331da34d621 not found: ID does not exist" containerID="02e2c16a79f2a1c62a0f271a5163aad6d1e7f1a4d520ff01c255c331da34d621"
Apr 16 18:38:03.468799 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:03.468768 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02e2c16a79f2a1c62a0f271a5163aad6d1e7f1a4d520ff01c255c331da34d621"} err="failed to get container status \"02e2c16a79f2a1c62a0f271a5163aad6d1e7f1a4d520ff01c255c331da34d621\": rpc error: code = NotFound desc = could not find container \"02e2c16a79f2a1c62a0f271a5163aad6d1e7f1a4d520ff01c255c331da34d621\": container with ID starting with 02e2c16a79f2a1c62a0f271a5163aad6d1e7f1a4d520ff01c255c331da34d621 not found: ID does not exist"
Apr 16 18:38:03.489284 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:03.489258 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58"]
Apr 16 18:38:03.495256 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:03.494175 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-e032fb-predictor-55b46c576-5ln58"]
Apr 16 18:38:03.780846 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:03.780766 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3fcb851-f53a-40ba-a13c-88baf419533d" path="/var/lib/kubelet/pods/d3fcb851-f53a-40ba-a13c-88baf419533d/volumes"
Apr 16 18:38:04.458989 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:04.458955 2570 generic.go:358] "Generic (PLEG): container finished" podID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerID="4d0b0d5d942bd7b5a6234c1657ef597a39dab717c0dc9a8005233a2d7ce1d7f7" exitCode=0
Apr 16 18:38:04.459128 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:04.458996 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" event={"ID":"f1857e4f-d3f8-4cbd-a1b3-93d56fa57769","Type":"ContainerDied","Data":"4d0b0d5d942bd7b5a6234c1657ef597a39dab717c0dc9a8005233a2d7ce1d7f7"}
Apr 16 18:38:05.462968 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:05.462933 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" event={"ID":"f1857e4f-d3f8-4cbd-a1b3-93d56fa57769","Type":"ContainerStarted","Data":"bf8da4f85745577412b3fa2932a7b80c4148d21f406960bfdcd38d51f65340d1"}
Apr 16 18:38:05.463472 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:05.463285 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r"
Apr 16 18:38:05.464698 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:05.464673 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" podUID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.29:8080: connect: connection refused"
Apr 16 18:38:05.480417 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:05.480379 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" podStartSLOduration=5.480368919 podStartE2EDuration="5.480368919s" podCreationTimestamp="2026-04-16 18:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:38:05.479620386 +0000 UTC m=+1316.285373186" watchObservedRunningTime="2026-04-16 18:38:05.480368919 +0000 UTC m=+1316.286121720"
Apr 16 18:38:06.466584 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:06.466543 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" podUID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.29:8080: connect: connection refused"
Apr 16 18:38:16.466773 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:16.466724 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" podUID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.29:8080: connect: connection refused"
Apr 16 18:38:26.466557 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:26.466510 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" podUID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.29:8080: connect: connection refused"
Apr 16 18:38:36.467545 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:36.467452 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" podUID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.29:8080: connect: connection refused"
Apr 16 18:38:46.466612 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:46.466567 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" podUID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.29:8080: connect: connection refused"
Apr 16 18:38:56.466826 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:38:56.466777 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" podUID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.29:8080: connect: connection refused"
Apr 16 18:39:06.466622 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:06.466573 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" podUID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.29:8080: connect: connection refused"
Apr 16 18:39:11.781380 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:11.781351 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r"
Apr 16 18:39:20.218423 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:20.218386 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r"]
Apr 16 18:39:20.218969 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:20.218757 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" podUID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerName="kserve-container" containerID="cri-o://bf8da4f85745577412b3fa2932a7b80c4148d21f406960bfdcd38d51f65340d1" gracePeriod=30
Apr 16 18:39:20.294549 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:20.294517 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm"]
Apr 16 18:39:20.294784 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:20.294772 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d3fcb851-f53a-40ba-a13c-88baf419533d" containerName="storage-initializer"
Apr 16 18:39:20.294828 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:20.294785 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3fcb851-f53a-40ba-a13c-88baf419533d" containerName="storage-initializer"
Apr 16 18:39:20.294828 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:20.294799 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d3fcb851-f53a-40ba-a13c-88baf419533d" containerName="storage-initializer"
Apr 16 18:39:20.294828 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:20.294805 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3fcb851-f53a-40ba-a13c-88baf419533d" containerName="storage-initializer"
Apr 16 18:39:20.294917 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:20.294847 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="d3fcb851-f53a-40ba-a13c-88baf419533d" containerName="storage-initializer"
Apr 16 18:39:20.294917 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:20.294858 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="d3fcb851-f53a-40ba-a13c-88baf419533d" containerName="storage-initializer"
Apr 16 18:39:20.297607 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:20.297592 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm"
Apr 16 18:39:20.305312 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:20.305286 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm"]
Apr 16 18:39:20.395136 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:20.395100 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/6a8869e4-a955-4591-8e82-662628e49713-kserve-provision-location\") pod \"raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm\" (UID: \"6a8869e4-a955-4591-8e82-662628e49713\") " pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm"
Apr 16 18:39:20.495793 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:20.495708 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/6a8869e4-a955-4591-8e82-662628e49713-kserve-provision-location\") pod \"raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm\" (UID: \"6a8869e4-a955-4591-8e82-662628e49713\") " pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm"
Apr 16 18:39:20.496052 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:20.496036 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/6a8869e4-a955-4591-8e82-662628e49713-kserve-provision-location\") pod \"raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm\" (UID: \"6a8869e4-a955-4591-8e82-662628e49713\") " pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm"
Apr 16 18:39:20.608365 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:20.608329 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm"
Apr 16 18:39:20.723732 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:20.723705 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm"]
Apr 16 18:39:20.725728 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:39:20.725692 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a8869e4_a955_4591_8e82_662628e49713.slice/crio-dc4aef2cdd5d8db5f29e806b04357b9fe6c14d0c8884da04adc0f12a35784775 WatchSource:0}: Error finding container dc4aef2cdd5d8db5f29e806b04357b9fe6c14d0c8884da04adc0f12a35784775: Status 404 returned error can't find the container with id dc4aef2cdd5d8db5f29e806b04357b9fe6c14d0c8884da04adc0f12a35784775
Apr 16 18:39:21.669926 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:21.669893 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm" event={"ID":"6a8869e4-a955-4591-8e82-662628e49713","Type":"ContainerStarted","Data":"ea5828533b2f76b0daca06f8493df8771cda5ad5c1d802d78896f061c5cb419f"}
Apr 16 18:39:21.669926 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:21.669928 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm" event={"ID":"6a8869e4-a955-4591-8e82-662628e49713","Type":"ContainerStarted","Data":"dc4aef2cdd5d8db5f29e806b04357b9fe6c14d0c8884da04adc0f12a35784775"}
Apr 16 18:39:21.778516 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:21.778469 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" podUID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.29:8080: connect: connection refused"
Apr 16 18:39:24.458066 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.458042 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r"
Apr 16 18:39:24.628327 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.628299 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f1857e4f-d3f8-4cbd-a1b3-93d56fa57769-kserve-provision-location\") pod \"f1857e4f-d3f8-4cbd-a1b3-93d56fa57769\" (UID: \"f1857e4f-d3f8-4cbd-a1b3-93d56fa57769\") "
Apr 16 18:39:24.628615 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.628591 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1857e4f-d3f8-4cbd-a1b3-93d56fa57769-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" (UID: "f1857e4f-d3f8-4cbd-a1b3-93d56fa57769"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 16 18:39:24.680268 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.680221 2570 generic.go:358] "Generic (PLEG): container finished" podID="6a8869e4-a955-4591-8e82-662628e49713" containerID="ea5828533b2f76b0daca06f8493df8771cda5ad5c1d802d78896f061c5cb419f" exitCode=0
Apr 16 18:39:24.680400 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.680296 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm" event={"ID":"6a8869e4-a955-4591-8e82-662628e49713","Type":"ContainerDied","Data":"ea5828533b2f76b0daca06f8493df8771cda5ad5c1d802d78896f061c5cb419f"}
Apr 16 18:39:24.681724 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.681702 2570 generic.go:358] "Generic (PLEG): container finished" podID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerID="bf8da4f85745577412b3fa2932a7b80c4148d21f406960bfdcd38d51f65340d1" exitCode=0
Apr 16 18:39:24.681823 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.681748 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" event={"ID":"f1857e4f-d3f8-4cbd-a1b3-93d56fa57769","Type":"ContainerDied","Data":"bf8da4f85745577412b3fa2932a7b80c4148d21f406960bfdcd38d51f65340d1"}
Apr 16 18:39:24.681823 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.681762 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r"
Apr 16 18:39:24.681823 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.681769 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r" event={"ID":"f1857e4f-d3f8-4cbd-a1b3-93d56fa57769","Type":"ContainerDied","Data":"da2756bbdfde2b1c3849caccd7a3783004ed68c8f028392aa6eb9759e2f4607f"}
Apr 16 18:39:24.681823 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.681786 2570 scope.go:117] "RemoveContainer" containerID="bf8da4f85745577412b3fa2932a7b80c4148d21f406960bfdcd38d51f65340d1"
Apr 16 18:39:24.689421 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.689256 2570 scope.go:117] "RemoveContainer" containerID="4d0b0d5d942bd7b5a6234c1657ef597a39dab717c0dc9a8005233a2d7ce1d7f7"
Apr 16 18:39:24.695942 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.695875 2570 scope.go:117] "RemoveContainer" containerID="bf8da4f85745577412b3fa2932a7b80c4148d21f406960bfdcd38d51f65340d1"
Apr 16 18:39:24.696459 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:39:24.696430 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8da4f85745577412b3fa2932a7b80c4148d21f406960bfdcd38d51f65340d1\": container with ID starting with bf8da4f85745577412b3fa2932a7b80c4148d21f406960bfdcd38d51f65340d1 not found: ID does not exist" containerID="bf8da4f85745577412b3fa2932a7b80c4148d21f406960bfdcd38d51f65340d1"
Apr 16 18:39:24.696560 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.696468 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8da4f85745577412b3fa2932a7b80c4148d21f406960bfdcd38d51f65340d1"} err="failed to get container status \"bf8da4f85745577412b3fa2932a7b80c4148d21f406960bfdcd38d51f65340d1\": rpc error: code = NotFound desc = could not find container \"bf8da4f85745577412b3fa2932a7b80c4148d21f406960bfdcd38d51f65340d1\": container with ID starting with bf8da4f85745577412b3fa2932a7b80c4148d21f406960bfdcd38d51f65340d1 not found: ID does not exist"
Apr 16 18:39:24.696560 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.696493 2570 scope.go:117] "RemoveContainer" containerID="4d0b0d5d942bd7b5a6234c1657ef597a39dab717c0dc9a8005233a2d7ce1d7f7"
Apr 16 18:39:24.696954 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:39:24.696928 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d0b0d5d942bd7b5a6234c1657ef597a39dab717c0dc9a8005233a2d7ce1d7f7\": container with ID starting with 4d0b0d5d942bd7b5a6234c1657ef597a39dab717c0dc9a8005233a2d7ce1d7f7 not found: ID does not exist" containerID="4d0b0d5d942bd7b5a6234c1657ef597a39dab717c0dc9a8005233a2d7ce1d7f7"
Apr 16 18:39:24.697017 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.696965 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d0b0d5d942bd7b5a6234c1657ef597a39dab717c0dc9a8005233a2d7ce1d7f7"} err="failed to get container status \"4d0b0d5d942bd7b5a6234c1657ef597a39dab717c0dc9a8005233a2d7ce1d7f7\": rpc error: code = NotFound desc = could not find container \"4d0b0d5d942bd7b5a6234c1657ef597a39dab717c0dc9a8005233a2d7ce1d7f7\": container with ID starting with 4d0b0d5d942bd7b5a6234c1657ef597a39dab717c0dc9a8005233a2d7ce1d7f7 not found: ID does not exist"
Apr 16 18:39:24.709543 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.709515 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r"]
Apr 16 18:39:24.713442 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.713419 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-6fff8-predictor-577c9b867b-wv48r"]
Apr 16 18:39:24.729164 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:24.729136 2570 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f1857e4f-d3f8-4cbd-a1b3-93d56fa57769-kserve-provision-location\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\""
Apr 16 18:39:25.686985 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:25.686952 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm" event={"ID":"6a8869e4-a955-4591-8e82-662628e49713","Type":"ContainerStarted","Data":"39329b8b0e59195675845d924f0871d271e402398d084ad9eeb336deedac525c"}
Apr 16 18:39:25.687454 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:25.687329 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm"
Apr 16 18:39:25.688676 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:25.688649 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm" podUID="6a8869e4-a955-4591-8e82-662628e49713" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.30:8080: connect: connection refused"
Apr 16 18:39:25.704091 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:25.704045 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm" podStartSLOduration=5.704031182 podStartE2EDuration="5.704031182s" podCreationTimestamp="2026-04-16 18:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:39:25.70247877 +0000 UTC m=+1396.508231573" watchObservedRunningTime="2026-04-16 18:39:25.704031182 +0000 UTC m=+1396.509783983"
Apr 16 18:39:25.780905 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:25.780874 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" path="/var/lib/kubelet/pods/f1857e4f-d3f8-4cbd-a1b3-93d56fa57769/volumes"
Apr 16 18:39:26.689453 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:26.689409 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm" podUID="6a8869e4-a955-4591-8e82-662628e49713" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.30:8080: connect: connection refused"
Apr 16 18:39:36.690458 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:36.690413 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm" podUID="6a8869e4-a955-4591-8e82-662628e49713" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.30:8080: connect: connection refused"
Apr 16 18:39:46.690288 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:46.690215 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm" podUID="6a8869e4-a955-4591-8e82-662628e49713" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.30:8080: connect: connection refused"
Apr 16 18:39:56.690126 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:39:56.690078 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm" podUID="6a8869e4-a955-4591-8e82-662628e49713" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.30:8080: connect: connection refused"
Apr 16 18:40:06.690182 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:06.690095 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm" podUID="6a8869e4-a955-4591-8e82-662628e49713" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.30:8080: connect: connection refused"
Apr 16 18:40:16.689875 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:16.689828 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm" podUID="6a8869e4-a955-4591-8e82-662628e49713" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.30:8080: connect: connection refused"
Apr 16 18:40:26.690143 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:26.690087 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm" podUID="6a8869e4-a955-4591-8e82-662628e49713" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.30:8080: connect: connection refused"
Apr 16 18:40:36.690493 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:36.690442 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm"
Apr 16 18:40:40.403087 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:40.403049 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm"]
Apr 16 18:40:40.403502 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:40.403329 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm" podUID="6a8869e4-a955-4591-8e82-662628e49713" containerName="kserve-container" containerID="cri-o://39329b8b0e59195675845d924f0871d271e402398d084ad9eeb336deedac525c" gracePeriod=30
Apr 16 18:40:44.830784 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:44.830760 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm"
Apr 16 18:40:44.883293 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:44.883267 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/6a8869e4-a955-4591-8e82-662628e49713-kserve-provision-location\") pod \"6a8869e4-a955-4591-8e82-662628e49713\" (UID: \"6a8869e4-a955-4591-8e82-662628e49713\") "
Apr 16 18:40:44.883578 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:44.883554 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a8869e4-a955-4591-8e82-662628e49713-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "6a8869e4-a955-4591-8e82-662628e49713" (UID: "6a8869e4-a955-4591-8e82-662628e49713"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 16 18:40:44.897009 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:44.896986 2570 generic.go:358] "Generic (PLEG): container finished" podID="6a8869e4-a955-4591-8e82-662628e49713" containerID="39329b8b0e59195675845d924f0871d271e402398d084ad9eeb336deedac525c" exitCode=0
Apr 16 18:40:44.897121 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:44.897060 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm"
Apr 16 18:40:44.897121 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:44.897073 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm" event={"ID":"6a8869e4-a955-4591-8e82-662628e49713","Type":"ContainerDied","Data":"39329b8b0e59195675845d924f0871d271e402398d084ad9eeb336deedac525c"}
Apr 16 18:40:44.897121 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:44.897114 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm" event={"ID":"6a8869e4-a955-4591-8e82-662628e49713","Type":"ContainerDied","Data":"dc4aef2cdd5d8db5f29e806b04357b9fe6c14d0c8884da04adc0f12a35784775"}
Apr 16 18:40:44.897248 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:44.897132 2570 scope.go:117] "RemoveContainer" containerID="39329b8b0e59195675845d924f0871d271e402398d084ad9eeb336deedac525c"
Apr 16 18:40:44.904449 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:44.904429 2570 scope.go:117] "RemoveContainer" containerID="ea5828533b2f76b0daca06f8493df8771cda5ad5c1d802d78896f061c5cb419f"
Apr 16 18:40:44.911189 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:44.911148 2570 scope.go:117] "RemoveContainer" containerID="39329b8b0e59195675845d924f0871d271e402398d084ad9eeb336deedac525c"
Apr 16 18:40:44.911622 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:40:44.911606 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39329b8b0e59195675845d924f0871d271e402398d084ad9eeb336deedac525c\": container with ID starting with 39329b8b0e59195675845d924f0871d271e402398d084ad9eeb336deedac525c not found: ID does not exist" containerID="39329b8b0e59195675845d924f0871d271e402398d084ad9eeb336deedac525c"
Apr 16 18:40:44.911667 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:44.911632 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39329b8b0e59195675845d924f0871d271e402398d084ad9eeb336deedac525c"} err="failed to get container status \"39329b8b0e59195675845d924f0871d271e402398d084ad9eeb336deedac525c\": rpc error: code = NotFound desc = could not find container \"39329b8b0e59195675845d924f0871d271e402398d084ad9eeb336deedac525c\": container with ID starting with 39329b8b0e59195675845d924f0871d271e402398d084ad9eeb336deedac525c not found: ID does not exist"
Apr 16 18:40:44.911667 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:44.911649 2570 scope.go:117] "RemoveContainer" containerID="ea5828533b2f76b0daca06f8493df8771cda5ad5c1d802d78896f061c5cb419f"
Apr 16 18:40:44.911886 ip-10-0-141-219 kubenswrapper[2570]: E0416 18:40:44.911869 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea5828533b2f76b0daca06f8493df8771cda5ad5c1d802d78896f061c5cb419f\": container with ID starting with ea5828533b2f76b0daca06f8493df8771cda5ad5c1d802d78896f061c5cb419f not found: ID does not exist" containerID="ea5828533b2f76b0daca06f8493df8771cda5ad5c1d802d78896f061c5cb419f"
Apr 16 18:40:44.911939 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:44.911891 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea5828533b2f76b0daca06f8493df8771cda5ad5c1d802d78896f061c5cb419f"} err="failed to get container status \"ea5828533b2f76b0daca06f8493df8771cda5ad5c1d802d78896f061c5cb419f\": rpc error: code = NotFound desc = could not find container \"ea5828533b2f76b0daca06f8493df8771cda5ad5c1d802d78896f061c5cb419f\": container with ID starting with ea5828533b2f76b0daca06f8493df8771cda5ad5c1d802d78896f061c5cb419f not found: ID does not exist"
Apr 16 18:40:44.917177 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:44.917154 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm"]
Apr 16 18:40:44.922910 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:44.922891 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-aeacf-predictor-5cccdcdc7d-krsrm"]
Apr 16 18:40:44.983781 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:44.983754 2570 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/6a8869e4-a955-4591-8e82-662628e49713-kserve-provision-location\") on node \"ip-10-0-141-219.ec2.internal\" DevicePath \"\""
Apr 16 18:40:45.780873 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:40:45.780839 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a8869e4-a955-4591-8e82-662628e49713" path="/var/lib/kubelet/pods/6a8869e4-a955-4591-8e82-662628e49713/volumes"
Apr 16 18:41:06.562779 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.562738 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-chptg/must-gather-llmmr"]
Apr 16 18:41:06.563183 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.562981 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerName="kserve-container"
Apr 16 18:41:06.563183 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.562991 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerName="kserve-container"
Apr 16 18:41:06.563183 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.563008 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerName="storage-initializer"
Apr 16 18:41:06.563183 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.563013 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerName="storage-initializer"
Apr 16 18:41:06.563183 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.563023 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6a8869e4-a955-4591-8e82-662628e49713" containerName="storage-initializer"
Apr 16 18:41:06.563183 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.563030 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a8869e4-a955-4591-8e82-662628e49713" containerName="storage-initializer"
Apr 16 18:41:06.563183 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.563038 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6a8869e4-a955-4591-8e82-662628e49713" containerName="kserve-container"
Apr 16 18:41:06.563183 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.563043 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a8869e4-a955-4591-8e82-662628e49713" containerName="kserve-container"
Apr 16 18:41:06.563183 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.563086 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="6a8869e4-a955-4591-8e82-662628e49713"
containerName="kserve-container" Apr 16 18:41:06.563183 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.563096 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="f1857e4f-d3f8-4cbd-a1b3-93d56fa57769" containerName="kserve-container" Apr 16 18:41:06.567361 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.567343 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-chptg/must-gather-llmmr" Apr 16 18:41:06.569988 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.569966 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-chptg\"/\"kube-root-ca.crt\"" Apr 16 18:41:06.570100 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.569972 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-chptg\"/\"default-dockercfg-cnqsp\"" Apr 16 18:41:06.570270 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.570256 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-chptg\"/\"openshift-service-ca.crt\"" Apr 16 18:41:06.574904 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.574880 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-chptg/must-gather-llmmr"] Apr 16 18:41:06.637332 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.637304 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn8f4\" (UniqueName: \"kubernetes.io/projected/6c9580d0-4b0e-43c8-ab1b-18aaa46f8105-kube-api-access-cn8f4\") pod \"must-gather-llmmr\" (UID: \"6c9580d0-4b0e-43c8-ab1b-18aaa46f8105\") " pod="openshift-must-gather-chptg/must-gather-llmmr" Apr 16 18:41:06.637474 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.637344 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6c9580d0-4b0e-43c8-ab1b-18aaa46f8105-must-gather-output\") pod \"must-gather-llmmr\" (UID: \"6c9580d0-4b0e-43c8-ab1b-18aaa46f8105\") " pod="openshift-must-gather-chptg/must-gather-llmmr" Apr 16 18:41:06.738282 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.738247 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cn8f4\" (UniqueName: \"kubernetes.io/projected/6c9580d0-4b0e-43c8-ab1b-18aaa46f8105-kube-api-access-cn8f4\") pod \"must-gather-llmmr\" (UID: \"6c9580d0-4b0e-43c8-ab1b-18aaa46f8105\") " pod="openshift-must-gather-chptg/must-gather-llmmr" Apr 16 18:41:06.738472 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.738299 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6c9580d0-4b0e-43c8-ab1b-18aaa46f8105-must-gather-output\") pod \"must-gather-llmmr\" (UID: \"6c9580d0-4b0e-43c8-ab1b-18aaa46f8105\") " pod="openshift-must-gather-chptg/must-gather-llmmr" Apr 16 18:41:06.738623 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.738602 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6c9580d0-4b0e-43c8-ab1b-18aaa46f8105-must-gather-output\") pod \"must-gather-llmmr\" (UID: \"6c9580d0-4b0e-43c8-ab1b-18aaa46f8105\") " pod="openshift-must-gather-chptg/must-gather-llmmr" Apr 16 18:41:06.746917 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.746893 2570 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cn8f4\" (UniqueName: \"kubernetes.io/projected/6c9580d0-4b0e-43c8-ab1b-18aaa46f8105-kube-api-access-cn8f4\") pod \"must-gather-llmmr\" (UID: \"6c9580d0-4b0e-43c8-ab1b-18aaa46f8105\") " pod="openshift-must-gather-chptg/must-gather-llmmr" Apr 16 18:41:06.876382 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.876352 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-chptg/must-gather-llmmr" Apr 16 18:41:06.996473 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:06.996442 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-chptg/must-gather-llmmr"] Apr 16 18:41:06.999758 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:41:06.999724 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c9580d0_4b0e_43c8_ab1b_18aaa46f8105.slice/crio-f98aa6093580259af11a948bfb74c0e8bf5e2bf6eb39ed8f2b32e90b0021c242 WatchSource:0}: Error finding container f98aa6093580259af11a948bfb74c0e8bf5e2bf6eb39ed8f2b32e90b0021c242: Status 404 returned error can't find the container with id f98aa6093580259af11a948bfb74c0e8bf5e2bf6eb39ed8f2b32e90b0021c242 Apr 16 18:41:07.958204 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:07.958125 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-chptg/must-gather-llmmr" event={"ID":"6c9580d0-4b0e-43c8-ab1b-18aaa46f8105","Type":"ContainerStarted","Data":"3edbaab2564d5912e0c861bc56d49747078139599225eadffea14989e9b3e32e"} Apr 16 18:41:07.958204 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:07.958175 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-chptg/must-gather-llmmr" event={"ID":"6c9580d0-4b0e-43c8-ab1b-18aaa46f8105","Type":"ContainerStarted","Data":"f98aa6093580259af11a948bfb74c0e8bf5e2bf6eb39ed8f2b32e90b0021c242"} Apr 16 18:41:08.963128 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:08.963086 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-chptg/must-gather-llmmr" event={"ID":"6c9580d0-4b0e-43c8-ab1b-18aaa46f8105","Type":"ContainerStarted","Data":"a515d22dcc54b6aaf9f63f608d6b325fae4ae5ed23c165fbd2cd2c2be92e0152"} Apr 16 18:41:08.979730 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:08.979681 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-chptg/must-gather-llmmr" podStartSLOduration=2.186303323 podStartE2EDuration="2.979664481s" podCreationTimestamp="2026-04-16 18:41:06 +0000 UTC" firstStartedPulling="2026-04-16 18:41:07.001411685 +0000 UTC m=+1497.807164463" lastFinishedPulling="2026-04-16 18:41:07.794772838 +0000 UTC m=+1498.600525621" observedRunningTime="2026-04-16 18:41:08.977781592 +0000 UTC m=+1499.783534394" watchObservedRunningTime="2026-04-16 18:41:08.979664481 +0000 UTC m=+1499.785417314" Apr 16 18:41:09.255979 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:09.255901 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-7b74c_579176b9-8011-401d-aee2-a97cda1ea10f/global-pull-secret-syncer/0.log" Apr 16 18:41:09.431238 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:09.431200 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-jkbvg_b0897e32-576e-42ee-a9c4-bf56f480aba0/konnectivity-agent/0.log" Apr 16 18:41:09.533758 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:09.533673 2570 log.go:25] "Finished parsing log file" 
path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-141-219.ec2.internal_4f9f135ae9937d0daa3eb597d8fe2521/haproxy/0.log" Apr 16 18:41:09.748123 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:09.748090 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-acl-logging/0.log" Apr 16 18:41:09.750128 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:09.750107 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-acl-logging/0.log" Apr 16 18:41:13.471892 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:13.471861 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-nq9cn_438f800a-174a-40f5-9292-468d97227591/node-exporter/0.log" Apr 16 18:41:13.492568 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:13.492545 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-nq9cn_438f800a-174a-40f5-9292-468d97227591/kube-rbac-proxy/0.log" Apr 16 18:41:13.513216 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:13.513187 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-nq9cn_438f800a-174a-40f5-9292-468d97227591/init-textfile/0.log" Apr 16 18:41:15.177977 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:15.177945 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-console_networking-console-plugin-5cb6cf4cb4-297lb_90976564-8bbb-407b-a345-f362c0c02c2d/networking-console-plugin/0.log" Apr 16 18:41:16.437870 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.437840 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-chptg/perf-node-gather-daemonset-2p997"] Apr 16 18:41:16.440787 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.440763 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.449827 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.449801 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-chptg/perf-node-gather-daemonset-2p997"] Apr 16 18:41:16.520866 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.520832 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2ef5553a-b032-4d4c-8467-b8f3735c17cb-sys\") pod \"perf-node-gather-daemonset-2p997\" (UID: \"2ef5553a-b032-4d4c-8467-b8f3735c17cb\") " pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.521029 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.520878 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cct59\" (UniqueName: \"kubernetes.io/projected/2ef5553a-b032-4d4c-8467-b8f3735c17cb-kube-api-access-cct59\") pod \"perf-node-gather-daemonset-2p997\" (UID: \"2ef5553a-b032-4d4c-8467-b8f3735c17cb\") " pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.521029 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.520947 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ef5553a-b032-4d4c-8467-b8f3735c17cb-lib-modules\") pod \"perf-node-gather-daemonset-2p997\" (UID: \"2ef5553a-b032-4d4c-8467-b8f3735c17cb\") " pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.521104 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.521044 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/2ef5553a-b032-4d4c-8467-b8f3735c17cb-podres\") pod \"perf-node-gather-daemonset-2p997\" (UID: \"2ef5553a-b032-4d4c-8467-b8f3735c17cb\") " pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.521104 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.521080 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/2ef5553a-b032-4d4c-8467-b8f3735c17cb-proc\") pod \"perf-node-gather-daemonset-2p997\" (UID: \"2ef5553a-b032-4d4c-8467-b8f3735c17cb\") " pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.621605 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.621573 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cct59\" (UniqueName: \"kubernetes.io/projected/2ef5553a-b032-4d4c-8467-b8f3735c17cb-kube-api-access-cct59\") pod \"perf-node-gather-daemonset-2p997\" (UID: \"2ef5553a-b032-4d4c-8467-b8f3735c17cb\") " pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.621799 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.621616 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ef5553a-b032-4d4c-8467-b8f3735c17cb-lib-modules\") pod \"perf-node-gather-daemonset-2p997\" (UID: \"2ef5553a-b032-4d4c-8467-b8f3735c17cb\") " pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.621799 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.621691 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"podres\" (UniqueName: \"kubernetes.io/host-path/2ef5553a-b032-4d4c-8467-b8f3735c17cb-podres\") pod \"perf-node-gather-daemonset-2p997\" (UID: \"2ef5553a-b032-4d4c-8467-b8f3735c17cb\") " pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.621799 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.621720 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/2ef5553a-b032-4d4c-8467-b8f3735c17cb-proc\") pod \"perf-node-gather-daemonset-2p997\" (UID: \"2ef5553a-b032-4d4c-8467-b8f3735c17cb\") " pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.621799 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.621762 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2ef5553a-b032-4d4c-8467-b8f3735c17cb-sys\") pod \"perf-node-gather-daemonset-2p997\" (UID: \"2ef5553a-b032-4d4c-8467-b8f3735c17cb\") " pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.621980 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.621838 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ef5553a-b032-4d4c-8467-b8f3735c17cb-lib-modules\") pod \"perf-node-gather-daemonset-2p997\" (UID: \"2ef5553a-b032-4d4c-8467-b8f3735c17cb\") " pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.621980 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.621854 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/2ef5553a-b032-4d4c-8467-b8f3735c17cb-proc\") pod \"perf-node-gather-daemonset-2p997\" (UID: \"2ef5553a-b032-4d4c-8467-b8f3735c17cb\") " pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.621980 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.621849 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2ef5553a-b032-4d4c-8467-b8f3735c17cb-sys\") pod \"perf-node-gather-daemonset-2p997\" (UID: \"2ef5553a-b032-4d4c-8467-b8f3735c17cb\") " pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.621980 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.621892 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/2ef5553a-b032-4d4c-8467-b8f3735c17cb-podres\") pod \"perf-node-gather-daemonset-2p997\" (UID: \"2ef5553a-b032-4d4c-8467-b8f3735c17cb\") " pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.629867 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.629836 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cct59\" (UniqueName: \"kubernetes.io/projected/2ef5553a-b032-4d4c-8467-b8f3735c17cb-kube-api-access-cct59\") pod \"perf-node-gather-daemonset-2p997\" (UID: \"2ef5553a-b032-4d4c-8467-b8f3735c17cb\") " pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.751273 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.751171 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:16.888730 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.888651 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-chptg/perf-node-gather-daemonset-2p997"] Apr 16 18:41:16.891290 ip-10-0-141-219 kubenswrapper[2570]: W0416 18:41:16.891263 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2ef5553a_b032_4d4c_8467_b8f3735c17cb.slice/crio-3c931dd4303cd9a90d90accc7efc3b32ab97b8c8646f2a4a10e6fcafa9b1ba2e WatchSource:0}: Error finding container 3c931dd4303cd9a90d90accc7efc3b32ab97b8c8646f2a4a10e6fcafa9b1ba2e: Status 404 returned error can't find the container with id 3c931dd4303cd9a90d90accc7efc3b32ab97b8c8646f2a4a10e6fcafa9b1ba2e Apr 16 18:41:16.892931 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:16.892914 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 18:41:17.004014 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:17.003350 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" event={"ID":"2ef5553a-b032-4d4c-8467-b8f3735c17cb","Type":"ContainerStarted","Data":"a6745ff76433644fa3d487fb180b2599eaa409ebb0fa344eaf90a03077112896"} Apr 16 18:41:17.004014 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:17.003398 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" event={"ID":"2ef5553a-b032-4d4c-8467-b8f3735c17cb","Type":"ContainerStarted","Data":"3c931dd4303cd9a90d90accc7efc3b32ab97b8c8646f2a4a10e6fcafa9b1ba2e"} Apr 16 18:41:17.004014 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:17.003815 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:17.026062 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:17.026017 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" podStartSLOduration=1.026003279 podStartE2EDuration="1.026003279s" podCreationTimestamp="2026-04-16 18:41:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:41:17.025915059 +0000 UTC m=+1507.831667860" watchObservedRunningTime="2026-04-16 18:41:17.026003279 +0000 UTC m=+1507.831756077" Apr 16 18:41:17.230053 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:17.230025 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-gd5b4_be0027a7-e3ae-4c79-8020-883f6b6eda09/dns/0.log" Apr 16 18:41:17.253393 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:17.253364 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-gd5b4_be0027a7-e3ae-4c79-8020-883f6b6eda09/kube-rbac-proxy/0.log" Apr 16 18:41:17.366296 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:17.366211 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-svsjw_c77bc112-2094-4908-98e9-9722eea678f2/dns-node-resolver/0.log" Apr 16 18:41:17.819535 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:17.819507 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-d4t7h_fbc7bbf3-3e05-4fdd-ad20-de2ed4f13e8d/node-ca/0.log" Apr 16 18:41:18.923144 ip-10-0-141-219 kubenswrapper[2570]: 
I0416 18:41:18.923111 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-tblwn_63acc75e-52de-45b3-a91a-8c41889d9a55/serve-healthcheck-canary/0.log" Apr 16 18:41:19.304210 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:19.304134 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-9z848_25310a19-0ed5-4bcc-831c-55862fdf6d2f/kube-rbac-proxy/0.log" Apr 16 18:41:19.324264 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:19.324240 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-9z848_25310a19-0ed5-4bcc-831c-55862fdf6d2f/exporter/0.log" Apr 16 18:41:19.344512 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:19.344490 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-9z848_25310a19-0ed5-4bcc-831c-55862fdf6d2f/extractor/0.log" Apr 16 18:41:21.416744 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:21.416708 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_kserve-controller-manager-7668d57578-zj268_67ac5895-1883-4098-95c0-f84adec6a489/manager/0.log" Apr 16 18:41:21.578150 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:21.578118 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_seaweedfs-86cc847c5c-mq58k_5dd6cdbb-17d6-49da-9a47-c56a45bfda30/seaweedfs/0.log" Apr 16 18:41:23.018860 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:23.018832 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-chptg/perf-node-gather-daemonset-2p997" Apr 16 18:41:26.719198 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:26.719168 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-65f6j_f8eeffdd-37a1-4898-94ea-20c490313c34/kube-multus/0.log" Apr 16 18:41:27.078378 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:27.078307 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lxx4t_259c30de-27f1-414c-b384-b90b6e241cd8/kube-multus-additional-cni-plugins/0.log" Apr 16 18:41:27.101003 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:27.100979 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lxx4t_259c30de-27f1-414c-b384-b90b6e241cd8/egress-router-binary-copy/0.log" Apr 16 18:41:27.125742 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:27.125715 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lxx4t_259c30de-27f1-414c-b384-b90b6e241cd8/cni-plugins/0.log" Apr 16 18:41:27.148665 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:27.148642 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lxx4t_259c30de-27f1-414c-b384-b90b6e241cd8/bond-cni-plugin/0.log" Apr 16 18:41:27.169016 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:27.168987 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lxx4t_259c30de-27f1-414c-b384-b90b6e241cd8/routeoverride-cni/0.log" Apr 16 18:41:27.192507 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:27.192474 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lxx4t_259c30de-27f1-414c-b384-b90b6e241cd8/whereabouts-cni-bincopy/0.log" Apr 16 18:41:27.212920 ip-10-0-141-219 kubenswrapper[2570]: I0416 
18:41:27.212894 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lxx4t_259c30de-27f1-414c-b384-b90b6e241cd8/whereabouts-cni/0.log" Apr 16 18:41:27.340849 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:27.340777 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-hgcdt_5af0e6ec-389a-47dd-afc0-725b505e4635/network-metrics-daemon/0.log" Apr 16 18:41:27.360949 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:27.360923 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-hgcdt_5af0e6ec-389a-47dd-afc0-725b505e4635/kube-rbac-proxy/0.log" Apr 16 18:41:28.160994 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:28.160962 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-controller/0.log" Apr 16 18:41:28.180783 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:28.180758 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-acl-logging/0.log" Apr 16 18:41:28.187302 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:28.187279 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovn-acl-logging/1.log" Apr 16 18:41:28.208260 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:28.208217 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/kube-rbac-proxy-node/0.log" Apr 16 18:41:28.231629 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:28.231602 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/kube-rbac-proxy-ovn-metrics/0.log" Apr 16 18:41:28.254075 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:28.254053 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/northd/0.log" Apr 16 18:41:28.278347 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:28.278318 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/nbdb/0.log" Apr 16 18:41:28.306770 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:28.306700 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/sbdb/0.log" Apr 16 18:41:28.408497 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:28.408468 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hls95_9eda7e8d-1d99-41d3-acfb-b6c80829811c/ovnkube-controller/0.log" Apr 16 18:41:30.146205 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:30.146125 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-fzg6h_6d72f360-ffda-4447-8b43-c1059ff81bf3/network-check-target-container/0.log" Apr 16 18:41:31.124749 ip-10-0-141-219 kubenswrapper[2570]: I0416 18:41:31.124720 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-5jc6x_1b14fadb-4a71-439d-84de-91c5c3e29811/iptables-alerter/0.log" Apr 16 18:41:31.780283 ip-10-0-141-219 kubenswrapper[2570]: I0416 
18:41:31.780220 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-5z2wk_29f69f9c-834d-4ff7-92ac-005e00d0651c/tuned/0.log"