Apr 24 14:23:41.753418 ip-10-0-129-34 systemd[1]: Starting Kubernetes Kubelet...
Apr 24 14:23:42.206821 ip-10-0-129-34 kubenswrapper[2570]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 14:23:42.206821 ip-10-0-129-34 kubenswrapper[2570]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 24 14:23:42.206821 ip-10-0-129-34 kubenswrapper[2570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 14:23:42.206821 ip-10-0-129-34 kubenswrapper[2570]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 24 14:23:42.206821 ip-10-0-129-34 kubenswrapper[2570]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 14:23:42.209334 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.209240 2570 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 24 14:23:42.214369 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214346 2570 feature_gate.go:328] unrecognized feature gate: Example2
Apr 24 14:23:42.214369 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214366 2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 24 14:23:42.214369 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214370 2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 24 14:23:42.214369 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214373 2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 24 14:23:42.214369 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214377 2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214380 2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214384 2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214387 2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214390 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214392 2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214395 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214398 2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214401 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214403 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214406 2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214409 2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214412 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214415 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214417 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214420 2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214422 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214425 2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214428 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214430 2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 24 14:23:42.214557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214433 2570 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214435 2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214438 2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214440 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214443 2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214445 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214448 2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214450 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214460 2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214463 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214465 2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214468 2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214470 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214473 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214477 2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214480 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214482 2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214485 2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214488 2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214490 2570 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 24 14:23:42.215030 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214493 2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214496 2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214499 2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214501 2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214504 2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214507 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214510 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214512 2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214514 2570 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214518 2570 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214521 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214523 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214525 2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214529 2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214534 2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214538 2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214542 2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214546 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214549 2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 24 14:23:42.215541 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214552 2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214555 2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214557 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214560 2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214563 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214566 2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214569 2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214572 2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214575 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214578 2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214581 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214583 2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214586 2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214589 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214592 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214596 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214598 2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214602 2570 feature_gate.go:328] unrecognized feature gate: Example
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214605 2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 24 14:23:42.216038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214607 2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214610 2570 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214612 2570 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.214615 2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215003 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215008 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215011 2570 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215013 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215018 2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215022 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215025 2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215028 2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215031 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215034 2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215037 2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215040 2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215043 2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215046 2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215049 2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215053 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215057 2570 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 24 14:23:42.216550 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215060 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215062 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215065 2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215069 2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
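The deprecation warnings above say that --container-runtime-endpoint, --volume-plugin-dir, and --system-reserved belong in the file passed to --config, which the FLAG dump below shows as /etc/kubernetes/kubelet.conf. A minimal sketch of the equivalent stanza, assuming the kubelet.config.k8s.io/v1beta1 KubeletConfiguration schema and reusing the values this kubelet logs later in its startup; this is illustrative, not this cluster's actual file:

  # Hypothetical fragment of /etc/kubernetes/kubelet.conf replacing the deprecated flags
  # (values copied from the FLAG dump later in this log; the file's real contents are not shown here).
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  containerRuntimeEndpoint: unix:///var/run/crio/crio.sock   # unix:// prefix per upstream docs; the flag used the bare path
  volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
  systemReserved:
    cpu: 500m
    ephemeral-storage: 1Gi
    memory: 1Gi
  # --minimum-container-ttl-duration: the warning points to --eviction-hard / --eviction-soft
  # (evictionHard / evictionSoft in this file) instead; thresholds are cluster-specific.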
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215072 2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215075 2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215078 2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215081 2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215083 2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215086 2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215088 2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215091 2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215093 2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215096 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215111 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215114 2570 feature_gate.go:328] unrecognized feature gate: Example2
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215117 2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215120 2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215122 2570 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 24 14:23:42.217045 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215125 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215127 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215130 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215132 2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215135 2570 feature_gate.go:328] unrecognized feature gate: Example
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215138 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215140 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215143 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215145 2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215148 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215152 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215155 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215158 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215161 2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215163 2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215166 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215169 2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215171 2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215174 2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215176 2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 24 14:23:42.217521 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215178 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215182 2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215185 2570 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215187 2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215190 2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215192 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215195 2570 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215197 2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215200 2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215202 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215205 2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215207 2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215210 2570 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215212 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215214 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215217 2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215219 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215222 2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215224 2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215227 2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 24 14:23:42.218013 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215229 2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215233 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215235 2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215238 2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215241 2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215244 2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215246 2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215249 2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215252 2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.215254 2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216836 2570 flags.go:64] FLAG: --address="0.0.0.0"
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216845 2570 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216852 2570 flags.go:64] FLAG: --anonymous-auth="true"
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216856 2570 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216861 2570 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216865 2570 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216870 2570 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216874 2570 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216877 2570 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216881 2570 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216884 2570 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216888 2570 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 24 14:23:42.218514 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216891 2570 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216894 2570 flags.go:64] FLAG: --cgroup-root=""
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216897 2570 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216900 2570 flags.go:64] FLAG: --client-ca-file=""
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216903 2570 flags.go:64] FLAG: --cloud-config=""
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216906 2570 flags.go:64] FLAG: --cloud-provider="external"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216908 2570 flags.go:64] FLAG: --cluster-dns="[]"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216912 2570 flags.go:64] FLAG: --cluster-domain=""
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216915 2570 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216918 2570 flags.go:64] FLAG: --config-dir=""
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216921 2570 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216925 2570 flags.go:64] FLAG: --container-log-max-files="5"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216929 2570 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216932 2570 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216935 2570 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216939 2570 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216942 2570 flags.go:64] FLAG: --contention-profiling="false"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216945 2570 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216948 2570 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216951 2570 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216955 2570 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216959 2570 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216962 2570 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216965 2570 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216967 2570 flags.go:64] FLAG: --enable-load-reader="false"
Apr 24 14:23:42.219042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216971 2570 flags.go:64] FLAG: --enable-server="true"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216974 2570 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216979 2570 flags.go:64] FLAG: --event-burst="100"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216982 2570 flags.go:64] FLAG: --event-qps="50"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216985 2570 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216988 2570 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216991 2570 flags.go:64] FLAG: --eviction-hard=""
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216995 2570 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.216998 2570 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217001 2570 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217004 2570 flags.go:64] FLAG: --eviction-soft=""
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217007 2570 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217011 2570 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217013 2570 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217016 2570 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217019 2570 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217022 2570 flags.go:64] FLAG: --fail-swap-on="true"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217025 2570 flags.go:64] FLAG: --feature-gates=""
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217029 2570 flags.go:64] FLAG: --file-check-frequency="20s"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217032 2570 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217035 2570 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217039 2570 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217042 2570 flags.go:64] FLAG: --healthz-port="10248"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217044 2570 flags.go:64] FLAG: --help="false"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217047 2570 flags.go:64] FLAG: --hostname-override="ip-10-0-129-34.ec2.internal"
Apr 24 14:23:42.219654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217050 2570 flags.go:64] FLAG: --housekeeping-interval="10s"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217053 2570 flags.go:64] FLAG: --http-check-frequency="20s"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217056 2570 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217060 2570 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217063 2570 flags.go:64] FLAG: --image-gc-high-threshold="85"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217066 2570 flags.go:64] FLAG: --image-gc-low-threshold="80"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217069 2570 flags.go:64] FLAG: --image-service-endpoint=""
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217072 2570 flags.go:64] FLAG: --kernel-memcg-notification="false"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217078 2570 flags.go:64] FLAG: --kube-api-burst="100"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217081 2570 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217085 2570 flags.go:64] FLAG: --kube-api-qps="50"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217088 2570 flags.go:64] FLAG: --kube-reserved=""
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217092 2570 flags.go:64] FLAG: --kube-reserved-cgroup=""
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217094 2570 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217115 2570 flags.go:64] FLAG: --kubelet-cgroups=""
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217119 2570 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217122 2570 flags.go:64] FLAG: --lock-file=""
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217124 2570 flags.go:64] FLAG: --log-cadvisor-usage="false"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217127 2570 flags.go:64] FLAG: --log-flush-frequency="5s"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217130 2570 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217136 2570 flags.go:64] FLAG: --log-json-split-stream="false"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217139 2570 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217142 2570 flags.go:64] FLAG: --log-text-split-stream="false"
Apr 24 14:23:42.220275 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217145 2570 flags.go:64] FLAG: --logging-format="text"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217148 2570 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217152 2570 flags.go:64] FLAG: --make-iptables-util-chains="true"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217155 2570 flags.go:64] FLAG: --manifest-url=""
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217157 2570 flags.go:64] FLAG: --manifest-url-header=""
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217162 2570 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217165 2570 flags.go:64] FLAG: --max-open-files="1000000"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217170 2570 flags.go:64] FLAG: --max-pods="110"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217173 2570 flags.go:64] FLAG: --maximum-dead-containers="-1"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217176 2570 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217179 2570 flags.go:64] FLAG: --memory-manager-policy="None"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217181 2570 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217184 2570 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217187 2570 flags.go:64] FLAG: --node-ip="0.0.0.0"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217191 2570 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217198 2570 flags.go:64] FLAG: --node-status-max-images="50"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217202 2570 flags.go:64] FLAG: --node-status-update-frequency="10s"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217205 2570 flags.go:64] FLAG: --oom-score-adj="-999"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217208 2570 flags.go:64] FLAG: --pod-cidr=""
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217214 2570 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217220 2570 flags.go:64] FLAG: --pod-manifest-path=""
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217223 2570 flags.go:64] FLAG: --pod-max-pids="-1"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217226 2570 flags.go:64] FLAG: --pods-per-core="0"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217229 2570 flags.go:64] FLAG: --port="10250"
Apr 24 14:23:42.220828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217232 2570 flags.go:64] FLAG: --protect-kernel-defaults="false"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217235 2570 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-046a4f8f9f6f6c817"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217238 2570 flags.go:64] FLAG: --qos-reserved=""
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217241 2570 flags.go:64] FLAG: --read-only-port="10255"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217244 2570 flags.go:64] FLAG: --register-node="true"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217247 2570 flags.go:64] FLAG: --register-schedulable="true"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217250 2570 flags.go:64] FLAG: --register-with-taints=""
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217254 2570 flags.go:64] FLAG: --registry-burst="10"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217257 2570 flags.go:64] FLAG: --registry-qps="5"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217259 2570 flags.go:64] FLAG: --reserved-cpus=""
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217263 2570 flags.go:64] FLAG: --reserved-memory=""
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217266 2570 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217269 2570 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217272 2570 flags.go:64] FLAG: --rotate-certificates="false"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217275 2570 flags.go:64] FLAG: --rotate-server-certificates="false"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217278 2570 flags.go:64] FLAG: --runonce="false"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217281 2570 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217284 2570 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217287 2570 flags.go:64] FLAG: --seccomp-default="false"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217290 2570 flags.go:64] FLAG: --serialize-image-pulls="true"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217293 2570 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217296 2570 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217299 2570 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217302 2570 flags.go:64] FLAG: --storage-driver-password="root"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217305 2570 flags.go:64] FLAG: --storage-driver-secure="false"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217308 2570 flags.go:64] FLAG: --storage-driver-table="stats"
Apr 24 14:23:42.221487 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217311 2570 flags.go:64] FLAG: --storage-driver-user="root"
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217314 2570 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217318 2570 flags.go:64] FLAG: --sync-frequency="1m0s"
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217321 2570 flags.go:64] FLAG: --system-cgroups=""
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217324 2570 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217329 2570 flags.go:64] FLAG: --system-reserved-cgroup=""
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217332 2570 flags.go:64] FLAG: --tls-cert-file=""
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217335 2570 flags.go:64] FLAG: --tls-cipher-suites="[]"
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217339 2570 flags.go:64] FLAG: --tls-min-version=""
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217342 2570 flags.go:64] FLAG: --tls-private-key-file=""
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217344 2570 flags.go:64] FLAG: --topology-manager-policy="none"
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217347 2570 flags.go:64] FLAG: --topology-manager-policy-options=""
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217350 2570 flags.go:64] FLAG: --topology-manager-scope="container"
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217353 2570 flags.go:64] FLAG: --v="2"
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217357 2570 flags.go:64] FLAG: --version="false"
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217361 2570 flags.go:64] FLAG: --vmodule=""
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217366 2570 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.217369 2570 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217483 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217487 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217490 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217493 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217496 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 24 14:23:42.222133 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217499 2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217502 2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217504 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217507 2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217509 2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217511 2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217514 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217517 2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217519 2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217521 2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217526 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217530 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217533 2570 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217536 2570 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217539 2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217541 2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217544 2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217546 2570 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217549 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217551 2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 24 14:23:42.222695 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217554 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217556 2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217559 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217561 2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217564 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217566 2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217569 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217571 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217574 2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217576 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217579 2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217581 2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217584 2570 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217586 2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217589 2570 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217591 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217594 2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217598 2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217601 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 24 14:23:42.223242 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217604 2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217607 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217609 2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217614 2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217617 2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217620 2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217622 2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217625 2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217627 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217630 2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217632 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217635 2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217637 2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217639 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217642 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217645 2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217649 2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217651 2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217654 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 24 14:23:42.223759 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217656 2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217659 2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217661 2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217664 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217666 2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217669 2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217671 2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217673 2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217676 2570 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217678 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217681 2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217683 2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217685 2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217688 2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217690 2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217694 2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217698 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217701 2570 feature_gate.go:328] unrecognized feature gate: Example
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217704 2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217707 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 24 14:23:42.224258 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217710 2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 24 14:23:42.224758 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217713 2570 feature_gate.go:328] unrecognized feature gate: Example2
Apr 24 14:23:42.224758 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.217715 2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 24 14:23:42.224758 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.218406 2570 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 24 14:23:42.224847 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.224821 2570 server.go:530] "Kubelet version" kubeletVersion="v1.33.9"
Apr 24 14:23:42.224847 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.224836 2570 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 24 14:23:42.224900 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224887 2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 24 14:23:42.224900 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224893 2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 24 14:23:42.224900 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224898 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 24 14:23:42.224900 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224901 2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224904 2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224908 2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224911 2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224914 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224917 2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224920 2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224923 2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224925 2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224928 2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]:
W0424 14:23:42.224931 2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224933 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224936 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224938 2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224941 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224944 2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224947 2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224949 2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224952 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224955 2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 24 14:23:42.225038 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224957 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224960 2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224968 2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224971 2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224974 2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224976 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224979 2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224981 2570 feature_gate.go:328] unrecognized feature gate: Example2 Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224984 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224986 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224989 2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224991 2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224994 2570 feature_gate.go:328] unrecognized feature gate: 
AzureClusterHostedDNSInstall Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.224998 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225000 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225002 2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225005 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225007 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225010 2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225013 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 24 14:23:42.225548 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225015 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225017 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225020 2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225022 2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225025 2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225027 2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225030 2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225033 2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225035 2570 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225037 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225040 2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225042 2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225045 2570 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225048 2570 feature_gate.go:328] unrecognized feature gate: Example Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225050 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 
14:23:42.225059 2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225064 2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225068 2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225071 2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 24 14:23:42.226037 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225074 2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225077 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225080 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225082 2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225085 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225087 2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225090 2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225092 2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225095 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225097 2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225113 2570 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225116 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225119 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225122 2570 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225124 2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225127 2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225130 2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225132 2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225135 2570 feature_gate.go:328] 
unrecognized feature gate: GCPClusterHostedDNS Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225138 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 24 14:23:42.226514 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225141 2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 24 14:23:42.226993 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225143 2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 24 14:23:42.226993 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225146 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 24 14:23:42.226993 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225148 2570 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 24 14:23:42.226993 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.225153 2570 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 24 14:23:42.226993 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225277 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 24 14:23:42.226993 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225282 2570 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 24 14:23:42.226993 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225285 2570 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 24 14:23:42.226993 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225288 2570 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 24 14:23:42.226993 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225291 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 24 14:23:42.226993 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225294 2570 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 24 14:23:42.226993 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225297 2570 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 24 14:23:42.226993 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225299 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 24 14:23:42.226993 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225302 2570 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 24 14:23:42.226993 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225305 2570 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 24 14:23:42.226993 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225308 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 24 14:23:42.226993 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225311 2570 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225314 2570 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225316 2570 
feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225319 2570 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225321 2570 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225324 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225326 2570 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225329 2570 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225332 2570 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225334 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225337 2570 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225340 2570 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225342 2570 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225345 2570 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225349 2570 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225352 2570 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225355 2570 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225358 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225360 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 24 14:23:42.227387 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225363 2570 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225366 2570 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225369 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225373 2570 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225377 2570 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225379 2570 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225382 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225385 2570 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225387 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225389 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225392 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225395 2570 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225397 2570 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225400 2570 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225402 2570 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225405 2570 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225407 2570 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225410 2570 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225412 2570 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225415 2570 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 24 14:23:42.227844 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225417 2570 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225420 2570 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225422 2570 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225425 2570 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225428 2570 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225430 2570 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 
14:23:42.225433 2570 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225435 2570 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225438 2570 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225440 2570 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225443 2570 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225445 2570 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225447 2570 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225450 2570 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225452 2570 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225455 2570 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225457 2570 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225460 2570 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225462 2570 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225464 2570 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 24 14:23:42.228344 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225467 2570 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 24 14:23:42.228825 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225470 2570 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 24 14:23:42.228825 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225472 2570 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 24 14:23:42.228825 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225474 2570 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 24 14:23:42.228825 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225477 2570 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 24 14:23:42.228825 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225479 2570 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 24 14:23:42.228825 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225482 2570 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 24 14:23:42.228825 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225484 2570 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 24 14:23:42.228825 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225487 2570 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 24 
14:23:42.228825 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225489 2570 feature_gate.go:328] unrecognized feature gate: Example Apr 24 14:23:42.228825 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225491 2570 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 24 14:23:42.228825 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225494 2570 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 24 14:23:42.228825 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225497 2570 feature_gate.go:328] unrecognized feature gate: Example2 Apr 24 14:23:42.228825 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225499 2570 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 24 14:23:42.228825 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225502 2570 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 24 14:23:42.228825 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:42.225505 2570 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 24 14:23:42.228825 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.225509 2570 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 24 14:23:42.229370 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.226245 2570 server.go:962] "Client rotation is on, will bootstrap in background" Apr 24 14:23:42.229934 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.229920 2570 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Apr 24 14:23:42.230947 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.230936 2570 server.go:1019] "Starting client certificate rotation" Apr 24 14:23:42.231047 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.231032 2570 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 24 14:23:42.231079 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.231070 2570 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 24 14:23:42.255109 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.255089 2570 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 24 14:23:42.260116 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.260082 2570 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 24 14:23:42.273557 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.273534 2570 log.go:25] "Validated CRI v1 runtime API" Apr 24 14:23:42.279282 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.279264 2570 log.go:25] "Validated CRI v1 image API" Apr 24 14:23:42.280600 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.280582 2570 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 24 14:23:42.283625 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.283605 2570 fs.go:135] Filesystem UUIDs: map[304e8df9-b5b1-48a5-a5b6-437423d65a33:/dev/nvme0n1p3 
3e74d5bd-f8d3-4a4c-858d-956717b68395:/dev/nvme0n1p4 7B77-95E7:/dev/nvme0n1p2] Apr 24 14:23:42.283696 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.283625 2570 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Apr 24 14:23:42.286910 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.286889 2570 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 24 14:23:42.288744 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.288640 2570 manager.go:217] Machine: {Timestamp:2026-04-24 14:23:42.287517737 +0000 UTC m=+0.412921544 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3097615 MemoryCapacity:33164496896 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2479a68e6d965c01ede4bad5cf5412 SystemUUID:ec2479a6-8e6d-965c-01ed-e4bad5cf5412 BootID:1cf8a81c-63c8-4cce-93b4-debd96f45990 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582250496 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582246400 Type:vfs Inodes:4048400 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632902656 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6103040 Type:vfs Inodes:18446744073709551615 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:a5:b1:db:02:bd Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:a5:b1:db:02:bd Speed:0 Mtu:9001} {Name:ovs-system MacAddress:7e:80:ec:e6:fd:dc Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164496896 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Apr 24 14:23:42.288744 ip-10-0-129-34 kubenswrapper[2570]: I0424 
14:23:42.288739 2570 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Apr 24 14:23:42.288853 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.288820 2570 manager.go:233] Version: {KernelVersion:5.14.0-570.107.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260414-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Apr 24 14:23:42.291926 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.291901 2570 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 14:23:42.292064 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.291928 2570 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-129-34.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 14:23:42.292128 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.292073 2570 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 14:23:42.292128 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.292082 2570 container_manager_linux.go:306] "Creating device plugin manager" Apr 24 14:23:42.292128 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.292095 2570 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 24 14:23:42.292961 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.292950 2570 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 24 14:23:42.294193 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.294183 2570 state_mem.go:36] "Initialized new in-memory state store" Apr 24 14:23:42.294481 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.294471 2570 server.go:1267] "Using root directory" path="/var/lib/kubelet" Apr 24 14:23:42.297180 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.297170 2570 kubelet.go:491] "Attempting to sync node with API 
server" Apr 24 14:23:42.297220 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.297188 2570 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 14:23:42.297220 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.297200 2570 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Apr 24 14:23:42.297220 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.297209 2570 kubelet.go:397] "Adding apiserver pod source" Apr 24 14:23:42.297220 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.297217 2570 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 14:23:42.298409 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.298397 2570 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 24 14:23:42.298450 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.298416 2570 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 24 14:23:42.301845 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.301830 2570 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 24 14:23:42.303131 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.303097 2570 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 14:23:42.305113 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.305088 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 24 14:23:42.305172 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.305124 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 24 14:23:42.305172 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.305132 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 24 14:23:42.305172 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.305138 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 24 14:23:42.305172 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.305144 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 24 14:23:42.305172 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.305150 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Apr 24 14:23:42.305172 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.305156 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Apr 24 14:23:42.305172 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.305161 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 24 14:23:42.305172 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.305167 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 24 14:23:42.305172 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.305173 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 24 14:23:42.305398 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.305182 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 24 14:23:42.305398 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.305192 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 24 14:23:42.306736 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.306725 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 24 
14:23:42.306770 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.306737 2570 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Apr 24 14:23:42.308653 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.308628 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-129-34.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 14:23:42.308693 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.308648 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 14:23:42.310297 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.310283 2570 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-129-34.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 24 14:23:42.310599 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.310587 2570 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 14:23:42.310633 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.310624 2570 server.go:1295] "Started kubelet" Apr 24 14:23:42.310722 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.310699 2570 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 14:23:42.310815 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.310771 2570 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 14:23:42.310873 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.310842 2570 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 24 14:23:42.311484 ip-10-0-129-34 systemd[1]: Started Kubernetes Kubelet. 
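The Node Config record above (container_manager_linux.go:275) fixes what this node withholds from workloads: "SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"}, a memory.available hard-eviction threshold of 100Mi, and a null KubeReserved. A quick back-of-the-envelope check of the resulting allocatable memory, assuming the standard formula allocatable = capacity - kubeReserved - systemReserved - hardEviction, with the capacity taken from the MemoryCapacity field of the Machine record above:

package main

import "fmt"

func main() {
	const (
		Mi = int64(1) << 20
		Gi = int64(1) << 30
	)
	capacity := int64(33164496896) // MemoryCapacity from the Machine record
	kubeReserved := int64(0)       // "KubeReserved":null
	systemReserved := 1 * Gi       // "SystemReserved":{"memory":"1Gi",...}
	hardEviction := 100 * Mi       // {"Signal":"memory.available",...,"Quantity":"100Mi"}

	allocatable := capacity - kubeReserved - systemReserved - hardEviction
	fmt.Printf("allocatable memory: %d bytes (~%.2f GiB)\n",
		allocatable, float64(allocatable)/float64(Gi))
}

Of the ~30.9 GiB of physical memory the Machine record reports, roughly 29.8 GiB would remain allocatable under those reservations.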
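The "system:anonymous ... is forbidden" errors in this record are the expected noise of kubelet TLS bootstrap: the watchers start before the kubelet's client certificate exists, so the API server rejects their list calls as anonymous; the noise stops a few records below once the certificate signing request (csr-v7mzw) is approved and issued and the node registers. Approval and issuance land a few milliseconds apart there, so the CSR is evidently auto-approved, which on OpenShift is typically the machine approver's job. For reference, a minimal client-go sketch of the request half of that flow; the kubeconfig path and the PEM body are placeholders, not values from this log.

package main

import (
	"context"
	"fmt"

	certv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical bootstrap kubeconfig path; the real location is distro-specific.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/bootstrap-kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	csr := &certv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "csr-"}, // the server picks the suffix, e.g. csr-v7mzw
		Spec: certv1.CertificateSigningRequestSpec{
			// Placeholder PEM: a real kubelet submits a CSR for
			// CN=system:node:<nodeName>, O=system:nodes.
			Request:    []byte("-----BEGIN CERTIFICATE REQUEST-----\n...\n-----END CERTIFICATE REQUEST-----\n"),
			SignerName: certv1.KubeAPIServerClientKubeletSignerName,
			Usages:     []certv1.KeyUsage{certv1.UsageDigitalSignature, certv1.UsageClientAuth},
		},
	}

	created, err := client.CertificatesV1().CertificateSigningRequests().Create(
		context.TODO(), csr, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The kubelet then watches the CSR status for approval and the issued
	// certificate before swapping its client credentials, as the
	// certificate_manager.go and csr.go lines in this log show.
	fmt.Println("submitted:", created.Name)
}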
Apr 24 14:23:42.314522 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.314497 2570 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 14:23:42.316728 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.316708 2570 server.go:317] "Adding debug handlers to kubelet server" Apr 24 14:23:42.319027 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.318881 2570 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Apr 24 14:23:42.319456 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.319442 2570 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 14:23:42.320133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.320116 2570 factory.go:55] Registering systemd factory Apr 24 14:23:42.320216 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.320138 2570 factory.go:223] Registration of the systemd container factory successfully Apr 24 14:23:42.320317 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.320302 2570 factory.go:153] Registering CRI-O factory Apr 24 14:23:42.320377 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.320321 2570 factory.go:223] Registration of the crio container factory successfully Apr 24 14:23:42.320425 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.320384 2570 volume_manager.go:295] "The desired_state_of_world populator starts" Apr 24 14:23:42.320425 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.320392 2570 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Apr 24 14:23:42.320425 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.320384 2570 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 14:23:42.320425 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.320401 2570 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 14:23:42.320598 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.320435 2570 factory.go:103] Registering Raw factory Apr 24 14:23:42.320598 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.320452 2570 manager.go:1196] Started watching for new ooms in manager Apr 24 14:23:42.320598 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.320538 2570 reconstruct.go:97] "Volume reconstruction finished" Apr 24 14:23:42.320598 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.320546 2570 reconciler.go:26] "Reconciler: start to sync state" Apr 24 14:23:42.320772 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.320745 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-34.ec2.internal\" not found" Apr 24 14:23:42.321483 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.321466 2570 manager.go:319] Starting recovery of all containers Apr 24 14:23:42.321648 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.321628 2570 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-129-34.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Apr 24 14:23:42.321737 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.321720 2570 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list 
resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 14:23:42.322418 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.321420 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-129-34.ec2.internal.18a95107e4432c67 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-129-34.ec2.internal,UID:ip-10-0-129-34.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-129-34.ec2.internal,},FirstTimestamp:2026-04-24 14:23:42.310599783 +0000 UTC m=+0.436003588,LastTimestamp:2026-04-24 14:23:42.310599783 +0000 UTC m=+0.436003588,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-129-34.ec2.internal,}" Apr 24 14:23:42.323242 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.323219 2570 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Apr 24 14:23:42.326355 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.326317 2570 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-v7mzw" Apr 24 14:23:42.331763 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.331585 2570 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-v7mzw" Apr 24 14:23:42.331848 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.331821 2570 manager.go:324] Recovery completed Apr 24 14:23:42.336389 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.336378 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 14:23:42.339569 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.339555 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasSufficientMemory" Apr 24 14:23:42.339613 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.339590 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 14:23:42.339613 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.339604 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasSufficientPID" Apr 24 14:23:42.340146 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.340134 2570 cpu_manager.go:222] "Starting CPU manager" policy="none" Apr 24 14:23:42.340196 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.340147 2570 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Apr 24 14:23:42.340196 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.340164 2570 state_mem.go:36] "Initialized new in-memory state store" Apr 24 14:23:42.341840 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.341732 2570 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{ip-10-0-129-34.ec2.internal.18a95107e5fd52b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-129-34.ec2.internal,UID:ip-10-0-129-34.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-129-34.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-129-34.ec2.internal,},FirstTimestamp:2026-04-24 14:23:42.339576503 +0000 UTC m=+0.464980308,LastTimestamp:2026-04-24 14:23:42.339576503 +0000 UTC m=+0.464980308,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-129-34.ec2.internal,}" Apr 24 14:23:42.342465 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.342454 2570 policy_none.go:49] "None policy: Start" Apr 24 14:23:42.342513 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.342469 2570 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 14:23:42.342513 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.342488 2570 state_mem.go:35] "Initializing new in-memory state store" Apr 24 14:23:42.388855 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.383907 2570 manager.go:341] "Starting Device Plugin manager" Apr 24 14:23:42.388855 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.383940 2570 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 14:23:42.388855 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.383949 2570 server.go:85] "Starting device plugin registration server" Apr 24 14:23:42.388855 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.384292 2570 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 14:23:42.388855 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.384306 2570 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 14:23:42.388855 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.384444 2570 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Apr 24 14:23:42.388855 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.384555 2570 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Apr 24 14:23:42.388855 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.384563 2570 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 14:23:42.388855 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.385159 2570 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Apr 24 14:23:42.388855 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.385198 2570 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-129-34.ec2.internal\" not found" Apr 24 14:23:42.422774 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.422749 2570 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 14:23:42.423947 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.423925 2570 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 24 14:23:42.424048 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.423954 2570 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 14:23:42.424048 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.423973 2570 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 14:23:42.424048 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.423983 2570 kubelet.go:2451] "Starting kubelet main sync loop" Apr 24 14:23:42.424048 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.424020 2570 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 24 14:23:42.426535 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.426518 2570 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 14:23:42.485081 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.485010 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 14:23:42.485954 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.485939 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasSufficientMemory" Apr 24 14:23:42.486020 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.485970 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 14:23:42.486020 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.485981 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasSufficientPID" Apr 24 14:23:42.486020 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.486004 2570 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.493355 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.493338 2570 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.493400 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.493359 2570 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-129-34.ec2.internal\": node \"ip-10-0-129-34.ec2.internal\" not found" Apr 24 14:23:42.512765 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.512740 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-34.ec2.internal\" not found" Apr 24 14:23:42.525109 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.525075 2570 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-129-34.ec2.internal"] Apr 24 14:23:42.525183 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.525173 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 14:23:42.525913 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.525898 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasSufficientMemory" Apr 24 14:23:42.525981 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.525929 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 14:23:42.525981 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.525943 2570 
kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasSufficientPID" Apr 24 14:23:42.527206 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.527194 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 14:23:42.527366 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.527341 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.527417 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.527379 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 14:23:42.527860 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.527847 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasSufficientMemory" Apr 24 14:23:42.527923 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.527871 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 14:23:42.527923 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.527885 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasSufficientPID" Apr 24 14:23:42.527999 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.527958 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasSufficientMemory" Apr 24 14:23:42.527999 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.527990 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 14:23:42.528057 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.528001 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasSufficientPID" Apr 24 14:23:42.529031 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.529018 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.529096 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.529041 2570 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 14:23:42.529672 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.529660 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasSufficientMemory" Apr 24 14:23:42.529724 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.529687 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 14:23:42.529724 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.529699 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeHasSufficientPID" Apr 24 14:23:42.554761 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.554739 2570 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-129-34.ec2.internal\" not found" node="ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.558096 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.558081 2570 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-129-34.ec2.internal\" not found" node="ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.612867 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.612845 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-34.ec2.internal\" not found" Apr 24 14:23:42.622233 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.622208 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/84b73010dde18c2e537db575f397d1b5-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal\" (UID: \"84b73010dde18c2e537db575f397d1b5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.622233 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.622236 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/84b73010dde18c2e537db575f397d1b5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal\" (UID: \"84b73010dde18c2e537db575f397d1b5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.622373 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.622254 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/52f7ef5b748605fa2e3167b9e181ddfa-config\") pod \"kube-apiserver-proxy-ip-10-0-129-34.ec2.internal\" (UID: \"52f7ef5b748605fa2e3167b9e181ddfa\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.713365 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.713340 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-34.ec2.internal\" not found" Apr 24 14:23:42.722703 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.722687 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/84b73010dde18c2e537db575f397d1b5-etc-kube\") pod 
\"kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal\" (UID: \"84b73010dde18c2e537db575f397d1b5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.722754 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.722712 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/84b73010dde18c2e537db575f397d1b5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal\" (UID: \"84b73010dde18c2e537db575f397d1b5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.722754 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.722730 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/52f7ef5b748605fa2e3167b9e181ddfa-config\") pod \"kube-apiserver-proxy-ip-10-0-129-34.ec2.internal\" (UID: \"52f7ef5b748605fa2e3167b9e181ddfa\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.722817 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.722779 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/52f7ef5b748605fa2e3167b9e181ddfa-config\") pod \"kube-apiserver-proxy-ip-10-0-129-34.ec2.internal\" (UID: \"52f7ef5b748605fa2e3167b9e181ddfa\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.722817 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.722781 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/84b73010dde18c2e537db575f397d1b5-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal\" (UID: \"84b73010dde18c2e537db575f397d1b5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.722817 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.722796 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/84b73010dde18c2e537db575f397d1b5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal\" (UID: \"84b73010dde18c2e537db575f397d1b5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.814083 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.814018 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-34.ec2.internal\" not found" Apr 24 14:23:42.856538 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.856521 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.861123 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:42.861091 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-34.ec2.internal" Apr 24 14:23:42.914895 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:42.914866 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-34.ec2.internal\" not found" Apr 24 14:23:43.015403 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:43.015382 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-34.ec2.internal\" not found" Apr 24 14:23:43.115909 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:43.115839 2570 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-34.ec2.internal\" not found" Apr 24 14:23:43.196437 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.196415 2570 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 14:23:43.219965 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.219942 2570 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal" Apr 24 14:23:43.230723 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.230707 2570 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 24 14:23:43.230825 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.230808 2570 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 24 14:23:43.230874 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.230848 2570 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 24 14:23:43.230874 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:43.230863 2570 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://acde13fdfad1246a2b32627736c53cc4-e89cd257566aeea3.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/openshift-machine-config-operator/pods\": read tcp 10.0.129.34:41544->52.3.62.7:6443: use of closed network connection" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal" Apr 24 14:23:43.230936 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.230881 2570 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-34.ec2.internal" Apr 24 14:23:43.250176 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.250160 2570 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 14:23:43.297805 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.297778 2570 apiserver.go:52] "Watching apiserver" Apr 24 14:23:43.306905 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.306883 2570 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Apr 24 14:23:43.308371 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.308351 2570 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-multus/network-metrics-daemon-ld8rd","openshift-network-operator/iptables-alerter-lfbtg","kube-system/konnectivity-agent-dlcwf","openshift-cluster-node-tuning-operator/tuned-qd58c","openshift-dns/node-resolver-cmhmr","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal","openshift-multus/multus-fn8gq","openshift-network-diagnostics/network-check-target-7b299","openshift-ovn-kubernetes/ovnkube-node-dz989","kube-system/kube-apiserver-proxy-ip-10-0-129-34.ec2.internal","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8","openshift-image-registry/node-ca-k7xj4","openshift-multus/multus-additional-cni-plugins-jvbnt"] Apr 24 14:23:43.310886 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.310864 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.313124 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.313089 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-lfbtg" Apr 24 14:23:43.313238 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.313222 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-dlcwf" Apr 24 14:23:43.313869 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.313846 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 24 14:23:43.313869 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.313869 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 24 14:23:43.314016 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.313889 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 24 14:23:43.314016 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.313923 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-tjq9f\"" Apr 24 14:23:43.314136 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.314123 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 24 14:23:43.314658 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.314640 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.315651 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.315635 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-cmhmr" Apr 24 14:23:43.316280 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.316257 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 24 14:23:43.316407 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.316394 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" Apr 24 14:23:43.316465 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.316436 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\"" Apr 24 14:23:43.316465 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.316441 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 24 14:23:43.316567 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.316396 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 24 14:23:43.316567 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.316468 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-wmrp2\"" Apr 24 14:23:43.316567 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.316509 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-kv8sp\"" Apr 24 14:23:43.316789 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.316776 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:23:43.316907 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:43.316866 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c" Apr 24 14:23:43.317174 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.317158 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 24 14:23:43.317246 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.317181 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-mbtvc\"" Apr 24 14:23:43.317298 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.317247 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 24 14:23:43.317860 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.317840 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:23:43.317943 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:43.317905 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-7b299" podUID="cb704828-9d72-448f-8256-1dda6f6273ea" Apr 24 14:23:43.318065 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.318052 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Apr 24 14:23:43.318147 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.318076 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Apr 24 14:23:43.318817 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.318802 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-hr775\"" Apr 24 14:23:43.318950 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.318938 2570 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Apr 24 14:23:43.319177 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.319163 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.320437 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.320420 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.321706 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.321688 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-k7xj4" Apr 24 14:23:43.322011 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.321926 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 24 14:23:43.322011 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.321943 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 24 14:23:43.322011 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.321926 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l5f74\"" Apr 24 14:23:43.322189 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.322177 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 24 14:23:43.322241 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.322227 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 24 14:23:43.322289 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.322263 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 24 14:23:43.322341 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.322288 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 24 14:23:43.322828 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.322815 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\"" Apr 24 14:23:43.322906 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.322891 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-66tkf\"" Apr 24 
14:23:43.323095 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.323083 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\"" Apr 24 14:23:43.323166 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.323150 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.323263 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.323249 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 24 14:23:43.324138 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.324123 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 24 14:23:43.324210 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.324168 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 24 14:23:43.324440 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.324424 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-szvkl\"" Apr 24 14:23:43.324693 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.324674 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 24 14:23:43.325662 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.325644 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-multus-cni-dir\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.325749 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.325668 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-run-k8s-cni-cncf-io\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.325749 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.325685 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-multus-conf-dir\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.325749 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.325699 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b63a7014-b666-4162-b36c-f215db9ea517-konnectivity-ca\") pod \"konnectivity-agent-dlcwf\" (UID: \"b63a7014-b666-4162-b36c-f215db9ea517\") " pod="kube-system/konnectivity-agent-dlcwf" Apr 24 14:23:43.325749 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.325715 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2e8d53e8-f5d3-4863-b7e1-8141078a84b3-serviceca\") pod \"node-ca-k7xj4\" (UID: \"2e8d53e8-f5d3-4863-b7e1-8141078a84b3\") " 
pod="openshift-image-registry/node-ca-k7xj4" Apr 24 14:23:43.325749 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.325729 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-system-cni-dir\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.326002 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.325784 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-kubelet\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.326002 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.325836 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-run-netns\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.326002 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.325871 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-cni-bin\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.326002 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.325886 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 24 14:23:43.326002 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.325873 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-mjlgd\"" Apr 24 14:23:43.326002 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.325902 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-os-release\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.326002 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.325955 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5htmq\" (UniqueName: \"kubernetes.io/projected/1b6507b4-e71a-44e1-8d03-18abcb3b225d-kube-api-access-5htmq\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.326002 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.325980 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs\") pod \"network-metrics-daemon-ld8rd\" (UID: \"62277dce-4b78-4158-9951-1292c0fa443c\") " pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:23:43.326333 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326319 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 24 14:23:43.326507 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326491 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-var-lib-openvswitch\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.326540 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326522 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-kubernetes\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.326595 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326541 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-systemd\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.326595 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326573 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1b6507b4-e71a-44e1-8d03-18abcb3b225d-cni-binary-copy\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.326668 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326614 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-var-lib-cni-bin\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.326668 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326651 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-var-lib-kubelet\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.326726 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326676 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8vdg\" (UniqueName: \"kubernetes.io/projected/2e8d53e8-f5d3-4863-b7e1-8141078a84b3-kube-api-access-c8vdg\") pod \"node-ca-k7xj4\" (UID: \"2e8d53e8-f5d3-4863-b7e1-8141078a84b3\") " pod="openshift-image-registry/node-ca-k7xj4" Apr 24 14:23:43.326726 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326699 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-sys\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.326785 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326725 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-registration-dir\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.326785 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326749 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-device-dir\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.326785 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326770 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-etc-kubernetes\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.326894 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326822 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-run-ovn-kubernetes\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.326894 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326843 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/090e3afb-c111-4bf0-a107-0156c2f3a0f2-env-overrides\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.326894 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326859 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-sysctl-d\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.326894 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326880 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7966ddd8-be1a-45a0-8020-2cd96b2fd595-tmp-dir\") pod \"node-resolver-cmhmr\" (UID: \"7966ddd8-be1a-45a0-8020-2cd96b2fd595\") " pod="openshift-dns/node-resolver-cmhmr" Apr 24 14:23:43.327065 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326896 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q276h\" (UniqueName: \"kubernetes.io/projected/2bb3748f-64b2-4249-91e8-54ba5dd9c145-kube-api-access-q276h\") pod \"iptables-alerter-lfbtg\" (UID: \"2bb3748f-64b2-4249-91e8-54ba5dd9c145\") " pod="openshift-network-operator/iptables-alerter-lfbtg" Apr 24 14:23:43.327065 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326911 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv6h8\" (UniqueName: 
\"kubernetes.io/projected/62277dce-4b78-4158-9951-1292c0fa443c-kube-api-access-cv6h8\") pod \"network-metrics-daemon-ld8rd\" (UID: \"62277dce-4b78-4158-9951-1292c0fa443c\") " pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:23:43.327065 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326924 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/090e3afb-c111-4bf0-a107-0156c2f3a0f2-ovnkube-config\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.327065 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326938 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/090e3afb-c111-4bf0-a107-0156c2f3a0f2-ovnkube-script-lib\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.327065 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326950 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e8d53e8-f5d3-4863-b7e1-8141078a84b3-host\") pod \"node-ca-k7xj4\" (UID: \"2e8d53e8-f5d3-4863-b7e1-8141078a84b3\") " pod="openshift-image-registry/node-ca-k7xj4" Apr 24 14:23:43.327065 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326974 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-sysconfig\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.327065 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.326995 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-lib-modules\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.327065 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327009 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-multus-socket-dir-parent\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.327065 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327023 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b63a7014-b666-4162-b36c-f215db9ea517-agent-certs\") pod \"konnectivity-agent-dlcwf\" (UID: \"b63a7014-b666-4162-b36c-f215db9ea517\") " pod="kube-system/konnectivity-agent-dlcwf" Apr 24 14:23:43.327065 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327047 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/090e3afb-c111-4bf0-a107-0156c2f3a0f2-ovn-node-metrics-cert\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.327065 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327060 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-host\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.327517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327074 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cckcl\" (UniqueName: \"kubernetes.io/projected/7966ddd8-be1a-45a0-8020-2cd96b2fd595-kube-api-access-cckcl\") pod \"node-resolver-cmhmr\" (UID: \"7966ddd8-be1a-45a0-8020-2cd96b2fd595\") " pod="openshift-dns/node-resolver-cmhmr" Apr 24 14:23:43.327517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327090 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68rcm\" (UniqueName: \"kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm\") pod \"network-check-target-7b299\" (UID: \"cb704828-9d72-448f-8256-1dda6f6273ea\") " pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:23:43.327517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327124 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1b6507b4-e71a-44e1-8d03-18abcb3b225d-multus-daemon-config\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.327517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327152 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-run-multus-certs\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.327517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327174 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2bb3748f-64b2-4249-91e8-54ba5dd9c145-iptables-alerter-script\") pod \"iptables-alerter-lfbtg\" (UID: \"2bb3748f-64b2-4249-91e8-54ba5dd9c145\") " pod="openshift-network-operator/iptables-alerter-lfbtg" Apr 24 14:23:43.327517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327189 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-run-openvswitch\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.327517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327218 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-log-socket\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.327517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327242 2570 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gg7k\" (UniqueName: \"kubernetes.io/projected/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-kube-api-access-8gg7k\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.327517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327257 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-modprobe-d\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.327517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327270 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-slash\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.327517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327307 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/d852814e-e573-4e8b-b69a-d17116e07af7-etc-tuned\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.327517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327322 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-etc-selinux\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.327517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327339 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-systemd-units\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.327517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327367 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-run-systemd\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.327517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327397 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.327517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327416 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-socket-dir\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327449 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-cnibin\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327464 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2bb3748f-64b2-4249-91e8-54ba5dd9c145-host-slash\") pod \"iptables-alerter-lfbtg\" (UID: \"2bb3748f-64b2-4249-91e8-54ba5dd9c145\") " pod="openshift-network-operator/iptables-alerter-lfbtg" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327479 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-sysctl-conf\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327495 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-var-lib-kubelet\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327517 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-sys-fs\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327546 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d852814e-e573-4e8b-b69a-d17116e07af7-tmp\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327567 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkf44\" (UniqueName: \"kubernetes.io/projected/d852814e-e573-4e8b-b69a-d17116e07af7-kube-api-access-jkf44\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327582 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-etc-openvswitch\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327597 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-run-ovn\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327622 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-node-log\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327643 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-cni-netd\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327657 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j45d\" (UniqueName: \"kubernetes.io/projected/090e3afb-c111-4bf0-a107-0156c2f3a0f2-kube-api-access-4j45d\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327677 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-run\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327692 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-kubelet-dir\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327705 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-run-netns\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327723 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-hostroot\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.328069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327746 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/7966ddd8-be1a-45a0-8020-2cd96b2fd595-hosts-file\") pod \"node-resolver-cmhmr\" (UID: \"7966ddd8-be1a-45a0-8020-2cd96b2fd595\") " pod="openshift-dns/node-resolver-cmhmr" Apr 24 14:23:43.328640 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.327769 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-var-lib-cni-multus\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.333551 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.333503 2570 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-23 14:18:42 +0000 UTC" deadline="2027-09-24 12:05:34.161995055 +0000 UTC" Apr 24 14:23:43.333635 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.333550 2570 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="12429h41m50.828447264s" Apr 24 14:23:43.334857 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.334839 2570 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 24 14:23:43.346125 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:43.346083 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84b73010dde18c2e537db575f397d1b5.slice/crio-11a7f1c412d499067c2a2b78b267e3a17de05a29b6b790b57c7af23ae55a8923 WatchSource:0}: Error finding container 11a7f1c412d499067c2a2b78b267e3a17de05a29b6b790b57c7af23ae55a8923: Status 404 returned error can't find the container with id 11a7f1c412d499067c2a2b78b267e3a17de05a29b6b790b57c7af23ae55a8923 Apr 24 14:23:43.350968 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.350955 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 24 14:23:43.355824 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.355805 2570 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-bk6qg" Apr 24 14:23:43.363244 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.363219 2570 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-bk6qg" Apr 24 14:23:43.421274 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.421244 2570 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 14:23:43.426731 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.426686 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal" event={"ID":"84b73010dde18c2e537db575f397d1b5","Type":"ContainerStarted","Data":"11a7f1c412d499067c2a2b78b267e3a17de05a29b6b790b57c7af23ae55a8923"} Apr 24 14:23:43.427907 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.427884 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0d6bd978-a62b-4e69-9786-a9b7774d09db-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " 
pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.428049 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.427913 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-os-release\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.428049 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.427930 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5htmq\" (UniqueName: \"kubernetes.io/projected/1b6507b4-e71a-44e1-8d03-18abcb3b225d-kube-api-access-5htmq\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.428049 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.427950 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs\") pod \"network-metrics-daemon-ld8rd\" (UID: \"62277dce-4b78-4158-9951-1292c0fa443c\") " pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:23:43.428049 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.427976 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-var-lib-openvswitch\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.428049 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.427999 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-kubernetes\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.428049 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428022 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-os-release\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.428049 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428024 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0d6bd978-a62b-4e69-9786-a9b7774d09db-cni-binary-copy\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.428404 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428075 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-var-lib-openvswitch\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.428513 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428441 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-systemd\") pod 
\"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.428566 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428512 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1b6507b4-e71a-44e1-8d03-18abcb3b225d-cni-binary-copy\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.428566 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:43.428199 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 14:23:43.428566 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428535 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-systemd\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.428566 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428544 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-var-lib-cni-bin\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.428743 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428590 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-var-lib-cni-bin\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.428743 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428595 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-var-lib-kubelet\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.428743 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:43.428648 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs podName:62277dce-4b78-4158-9951-1292c0fa443c nodeName:}" failed. No retries permitted until 2026-04-24 14:23:43.928586568 +0000 UTC m=+2.053990381 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs") pod "network-metrics-daemon-ld8rd" (UID: "62277dce-4b78-4158-9951-1292c0fa443c") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 14:23:43.428743 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428645 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-var-lib-kubelet\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.428743 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428707 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c8vdg\" (UniqueName: \"kubernetes.io/projected/2e8d53e8-f5d3-4863-b7e1-8141078a84b3-kube-api-access-c8vdg\") pod \"node-ca-k7xj4\" (UID: \"2e8d53e8-f5d3-4863-b7e1-8141078a84b3\") " pod="openshift-image-registry/node-ca-k7xj4" Apr 24 14:23:43.428743 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428724 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-kubernetes\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.429023 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428842 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-sys\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.429023 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428892 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-registration-dir\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.429023 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428926 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-device-dir\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.429023 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428959 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-etc-kubernetes\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.429023 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.428992 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-run-ovn-kubernetes\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.429023 ip-10-0-129-34 
kubenswrapper[2570]: I0424 14:23:43.429017 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-registration-dir\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.429320 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429028 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-sys\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.429320 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429038 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/090e3afb-c111-4bf0-a107-0156c2f3a0f2-env-overrides\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.429320 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429084 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-etc-kubernetes\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.429320 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429095 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-run-ovn-kubernetes\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.429320 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429136 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-sysctl-d\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.429320 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429172 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7966ddd8-be1a-45a0-8020-2cd96b2fd595-tmp-dir\") pod \"node-resolver-cmhmr\" (UID: \"7966ddd8-be1a-45a0-8020-2cd96b2fd595\") " pod="openshift-dns/node-resolver-cmhmr" Apr 24 14:23:43.429320 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429180 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-device-dir\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.429320 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429204 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/0d6bd978-a62b-4e69-9786-a9b7774d09db-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " 
pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.429320 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429242 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q276h\" (UniqueName: \"kubernetes.io/projected/2bb3748f-64b2-4249-91e8-54ba5dd9c145-kube-api-access-q276h\") pod \"iptables-alerter-lfbtg\" (UID: \"2bb3748f-64b2-4249-91e8-54ba5dd9c145\") " pod="openshift-network-operator/iptables-alerter-lfbtg" Apr 24 14:23:43.429320 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429278 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cv6h8\" (UniqueName: \"kubernetes.io/projected/62277dce-4b78-4158-9951-1292c0fa443c-kube-api-access-cv6h8\") pod \"network-metrics-daemon-ld8rd\" (UID: \"62277dce-4b78-4158-9951-1292c0fa443c\") " pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:23:43.429320 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429302 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-sysctl-d\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.429320 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429313 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/090e3afb-c111-4bf0-a107-0156c2f3a0f2-ovnkube-config\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.429889 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429348 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1b6507b4-e71a-44e1-8d03-18abcb3b225d-cni-binary-copy\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.429889 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429360 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/090e3afb-c111-4bf0-a107-0156c2f3a0f2-ovnkube-script-lib\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.429889 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429394 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e8d53e8-f5d3-4863-b7e1-8141078a84b3-host\") pod \"node-ca-k7xj4\" (UID: \"2e8d53e8-f5d3-4863-b7e1-8141078a84b3\") " pod="openshift-image-registry/node-ca-k7xj4" Apr 24 14:23:43.429889 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429425 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-sysconfig\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.429889 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429484 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-lib-modules\") pod \"tuned-qd58c\" 
(UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.429889 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429523 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0d6bd978-a62b-4e69-9786-a9b7774d09db-os-release\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.429889 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429585 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e8d53e8-f5d3-4863-b7e1-8141078a84b3-host\") pod \"node-ca-k7xj4\" (UID: \"2e8d53e8-f5d3-4863-b7e1-8141078a84b3\") " pod="openshift-image-registry/node-ca-k7xj4" Apr 24 14:23:43.429889 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429486 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-sysconfig\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.429889 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429561 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-multus-socket-dir-parent\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.429889 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429523 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/090e3afb-c111-4bf0-a107-0156c2f3a0f2-env-overrides\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.429889 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429675 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-multus-socket-dir-parent\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.429889 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429762 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7966ddd8-be1a-45a0-8020-2cd96b2fd595-tmp-dir\") pod \"node-resolver-cmhmr\" (UID: \"7966ddd8-be1a-45a0-8020-2cd96b2fd595\") " pod="openshift-dns/node-resolver-cmhmr" Apr 24 14:23:43.429889 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429891 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b63a7014-b666-4162-b36c-f215db9ea517-agent-certs\") pod \"konnectivity-agent-dlcwf\" (UID: \"b63a7014-b666-4162-b36c-f215db9ea517\") " pod="kube-system/konnectivity-agent-dlcwf" Apr 24 14:23:43.430500 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429922 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/090e3afb-c111-4bf0-a107-0156c2f3a0f2-ovn-node-metrics-cert\") pod 
\"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.430500 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429943 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-lib-modules\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.430500 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.429952 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-host\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.430500 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430003 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cckcl\" (UniqueName: \"kubernetes.io/projected/7966ddd8-be1a-45a0-8020-2cd96b2fd595-kube-api-access-cckcl\") pod \"node-resolver-cmhmr\" (UID: \"7966ddd8-be1a-45a0-8020-2cd96b2fd595\") " pod="openshift-dns/node-resolver-cmhmr" Apr 24 14:23:43.430500 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430037 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-68rcm\" (UniqueName: \"kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm\") pod \"network-check-target-7b299\" (UID: \"cb704828-9d72-448f-8256-1dda6f6273ea\") " pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:23:43.430500 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430052 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/090e3afb-c111-4bf0-a107-0156c2f3a0f2-ovnkube-script-lib\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.430500 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430063 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1b6507b4-e71a-44e1-8d03-18abcb3b225d-multus-daemon-config\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.430500 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430094 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-run-multus-certs\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.430500 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430167 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-run-multus-certs\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.430500 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430167 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/090e3afb-c111-4bf0-a107-0156c2f3a0f2-ovnkube-config\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.430500 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430312 2570 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 24 14:23:43.430500 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430353 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2bb3748f-64b2-4249-91e8-54ba5dd9c145-iptables-alerter-script\") pod \"iptables-alerter-lfbtg\" (UID: \"2bb3748f-64b2-4249-91e8-54ba5dd9c145\") " pod="openshift-network-operator/iptables-alerter-lfbtg" Apr 24 14:23:43.430500 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430397 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-run-openvswitch\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.430500 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430442 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-log-socket\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.430500 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430476 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8gg7k\" (UniqueName: \"kubernetes.io/projected/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-kube-api-access-8gg7k\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430512 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-modprobe-d\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430554 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-slash\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430589 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/d852814e-e573-4e8b-b69a-d17116e07af7-etc-tuned\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430622 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: 
\"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-etc-selinux\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430646 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1b6507b4-e71a-44e1-8d03-18abcb3b225d-multus-daemon-config\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430652 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-systemd-units\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430692 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-run-systemd\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430727 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430763 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-socket-dir\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430805 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-cnibin\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430879 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-cnibin\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430901 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-etc-selinux\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430943 2570 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-systemd-units\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430968 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2bb3748f-64b2-4249-91e8-54ba5dd9c145-host-slash\") pod \"iptables-alerter-lfbtg\" (UID: \"2bb3748f-64b2-4249-91e8-54ba5dd9c145\") " pod="openshift-network-operator/iptables-alerter-lfbtg" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.430972 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-host\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431004 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-sysctl-conf\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431001 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-run-systemd\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.431170 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431032 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-var-lib-kubelet\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.431964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431060 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-run-openvswitch\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.431964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431063 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-sys-fs\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.431964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431066 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.431964 ip-10-0-129-34 
kubenswrapper[2570]: I0424 14:23:43.431129 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-sys-fs\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.431964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431142 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-log-socket\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.431964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431182 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-socket-dir\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.431964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431221 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d852814e-e573-4e8b-b69a-d17116e07af7-tmp\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.431964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431229 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-modprobe-d\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.431964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431264 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jkf44\" (UniqueName: \"kubernetes.io/projected/d852814e-e573-4e8b-b69a-d17116e07af7-kube-api-access-jkf44\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.431964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431297 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-etc-openvswitch\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.431964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431328 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-run-ovn\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.431964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431357 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-node-log\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 
14:23:43.431964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431386 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-cni-netd\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.431964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431412 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4j45d\" (UniqueName: \"kubernetes.io/projected/090e3afb-c111-4bf0-a107-0156c2f3a0f2-kube-api-access-4j45d\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.431964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431444 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-run\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.431964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431479 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-kubelet-dir\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.431964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431508 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-run-netns\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431538 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-hostroot\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431561 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7966ddd8-be1a-45a0-8020-2cd96b2fd595-hosts-file\") pod \"node-resolver-cmhmr\" (UID: \"7966ddd8-be1a-45a0-8020-2cd96b2fd595\") " pod="openshift-dns/node-resolver-cmhmr" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431595 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0d6bd978-a62b-4e69-9786-a9b7774d09db-cnibin\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431625 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-var-lib-cni-multus\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " 
pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431657 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0d6bd978-a62b-4e69-9786-a9b7774d09db-system-cni-dir\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431689 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0d6bd978-a62b-4e69-9786-a9b7774d09db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431721 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-multus-cni-dir\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431763 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-run-k8s-cni-cncf-io\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431790 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-multus-conf-dir\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431820 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b63a7014-b666-4162-b36c-f215db9ea517-konnectivity-ca\") pod \"konnectivity-agent-dlcwf\" (UID: \"b63a7014-b666-4162-b36c-f215db9ea517\") " pod="kube-system/konnectivity-agent-dlcwf" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431851 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b554m\" (UniqueName: \"kubernetes.io/projected/0d6bd978-a62b-4e69-9786-a9b7774d09db-kube-api-access-b554m\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431883 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2e8d53e8-f5d3-4863-b7e1-8141078a84b3-serviceca\") pod \"node-ca-k7xj4\" (UID: \"2e8d53e8-f5d3-4863-b7e1-8141078a84b3\") " pod="openshift-image-registry/node-ca-k7xj4" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431921 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-system-cni-dir\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431947 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-kubelet\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431976 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-run-netns\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.432007 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-cni-bin\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.432558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.432124 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-cni-bin\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.432177 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-slash\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.432606 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7966ddd8-be1a-45a0-8020-2cd96b2fd595-hosts-file\") pod \"node-resolver-cmhmr\" (UID: \"7966ddd8-be1a-45a0-8020-2cd96b2fd595\") " pod="openshift-dns/node-resolver-cmhmr" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.432660 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2bb3748f-64b2-4249-91e8-54ba5dd9c145-host-slash\") pod \"iptables-alerter-lfbtg\" (UID: \"2bb3748f-64b2-4249-91e8-54ba5dd9c145\") " pod="openshift-network-operator/iptables-alerter-lfbtg" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.432773 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-etc-sysctl-conf\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.432800 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-run-k8s-cni-cncf-io\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.432829 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-var-lib-kubelet\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.432876 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-var-lib-cni-multus\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.432883 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-run-ovn\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.431623 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2bb3748f-64b2-4249-91e8-54ba5dd9c145-iptables-alerter-script\") pod \"iptables-alerter-lfbtg\" (UID: \"2bb3748f-64b2-4249-91e8-54ba5dd9c145\") " pod="openshift-network-operator/iptables-alerter-lfbtg" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.432927 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-etc-openvswitch\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.432988 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-node-log\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.432990 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-multus-cni-dir\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.433038 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-cni-netd\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.433056 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-kubelet-dir\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.433088 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-host-run-netns\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.433163 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-hostroot\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.433356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.433332 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d852814e-e573-4e8b-b69a-d17116e07af7-run\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.434183 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.433784 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2e8d53e8-f5d3-4863-b7e1-8141078a84b3-serviceca\") pod \"node-ca-k7xj4\" (UID: \"2e8d53e8-f5d3-4863-b7e1-8141078a84b3\") " pod="openshift-image-registry/node-ca-k7xj4" Apr 24 14:23:43.434183 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.433887 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-multus-conf-dir\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.434183 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.434163 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-kubelet\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.434319 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.434212 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b6507b4-e71a-44e1-8d03-18abcb3b225d-system-cni-dir\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.434319 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.434244 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/090e3afb-c111-4bf0-a107-0156c2f3a0f2-host-run-netns\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.434319 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.434252 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b63a7014-b666-4162-b36c-f215db9ea517-agent-certs\") pod \"konnectivity-agent-dlcwf\" (UID: 
\"b63a7014-b666-4162-b36c-f215db9ea517\") " pod="kube-system/konnectivity-agent-dlcwf" Apr 24 14:23:43.434465 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.434328 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b63a7014-b666-4162-b36c-f215db9ea517-konnectivity-ca\") pod \"konnectivity-agent-dlcwf\" (UID: \"b63a7014-b666-4162-b36c-f215db9ea517\") " pod="kube-system/konnectivity-agent-dlcwf" Apr 24 14:23:43.434465 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.434426 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/d852814e-e573-4e8b-b69a-d17116e07af7-etc-tuned\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.434871 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.434839 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d852814e-e573-4e8b-b69a-d17116e07af7-tmp\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.435410 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.435390 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/090e3afb-c111-4bf0-a107-0156c2f3a0f2-ovn-node-metrics-cert\") pod \"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.439387 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:43.439367 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 24 14:23:43.439387 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:43.439389 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 24 14:23:43.439534 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:43.439404 2570 projected.go:194] Error preparing data for projected volume kube-api-access-68rcm for pod openshift-network-diagnostics/network-check-target-7b299: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 14:23:43.439534 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:43.439474 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm podName:cb704828-9d72-448f-8256-1dda6f6273ea nodeName:}" failed. No retries permitted until 2026-04-24 14:23:43.939455774 +0000 UTC m=+2.064859586 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-68rcm" (UniqueName: "kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm") pod "network-check-target-7b299" (UID: "cb704828-9d72-448f-8256-1dda6f6273ea") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 14:23:43.441115 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.441069 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8vdg\" (UniqueName: \"kubernetes.io/projected/2e8d53e8-f5d3-4863-b7e1-8141078a84b3-kube-api-access-c8vdg\") pod \"node-ca-k7xj4\" (UID: \"2e8d53e8-f5d3-4863-b7e1-8141078a84b3\") " pod="openshift-image-registry/node-ca-k7xj4" Apr 24 14:23:43.441400 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.441376 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5htmq\" (UniqueName: \"kubernetes.io/projected/1b6507b4-e71a-44e1-8d03-18abcb3b225d-kube-api-access-5htmq\") pod \"multus-fn8gq\" (UID: \"1b6507b4-e71a-44e1-8d03-18abcb3b225d\") " pod="openshift-multus/multus-fn8gq" Apr 24 14:23:43.441682 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.441663 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv6h8\" (UniqueName: \"kubernetes.io/projected/62277dce-4b78-4158-9951-1292c0fa443c-kube-api-access-cv6h8\") pod \"network-metrics-daemon-ld8rd\" (UID: \"62277dce-4b78-4158-9951-1292c0fa443c\") " pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:23:43.442222 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.442128 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gg7k\" (UniqueName: \"kubernetes.io/projected/e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363-kube-api-access-8gg7k\") pod \"aws-ebs-csi-driver-node-qjqv8\" (UID: \"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" Apr 24 14:23:43.442291 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.442271 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q276h\" (UniqueName: \"kubernetes.io/projected/2bb3748f-64b2-4249-91e8-54ba5dd9c145-kube-api-access-q276h\") pod \"iptables-alerter-lfbtg\" (UID: \"2bb3748f-64b2-4249-91e8-54ba5dd9c145\") " pod="openshift-network-operator/iptables-alerter-lfbtg" Apr 24 14:23:43.442385 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.442364 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cckcl\" (UniqueName: \"kubernetes.io/projected/7966ddd8-be1a-45a0-8020-2cd96b2fd595-kube-api-access-cckcl\") pod \"node-resolver-cmhmr\" (UID: \"7966ddd8-be1a-45a0-8020-2cd96b2fd595\") " pod="openshift-dns/node-resolver-cmhmr" Apr 24 14:23:43.442427 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.442411 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkf44\" (UniqueName: \"kubernetes.io/projected/d852814e-e573-4e8b-b69a-d17116e07af7-kube-api-access-jkf44\") pod \"tuned-qd58c\" (UID: \"d852814e-e573-4e8b-b69a-d17116e07af7\") " pod="openshift-cluster-node-tuning-operator/tuned-qd58c" Apr 24 14:23:43.443043 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.443027 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j45d\" (UniqueName: \"kubernetes.io/projected/090e3afb-c111-4bf0-a107-0156c2f3a0f2-kube-api-access-4j45d\") pod 
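
The kube-api-access-* volumes mounted above are the projected service-account volumes injected into every pod: a bound service-account token, the cluster CA bundle from the kube-root-ca.crt ConfigMap (plus openshift-service-ca.crt on OpenShift, per the errors above), and the pod's namespace via the downward API. The failure for network-check-target-7b299 is not a missing object on the API server; the kubelet resolves these references through per-pod local caches, and until that pod's ConfigMaps are registered there the lookup fails and the mount is parked for retry. A sketch of the projection using the k8s.io/api types (illustrative; the volume layout is the standard one, but field values such as the token lifetime are assumptions, not read from this cluster):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	expiry := int64(3607) // typical bound-token lifetime (assumption)
    	vol := corev1.Volume{
    		Name: "kube-api-access-68rcm",
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{
    					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
    						Path: "token", ExpirationSeconds: &expiry,
    					}},
    					{ConfigMap: &corev1.ConfigMapProjection{
    						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
    						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
    					}},
    					{DownwardAPI: &corev1.DownwardAPIProjection{
    						Items: []corev1.DownwardAPIVolumeFile{{
    							Path:     "namespace",
    							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
    						}},
    					}},
    				},
    			},
    		},
    	}
    	fmt.Println(vol.Name)
    }
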
\"ovnkube-node-dz989\" (UID: \"090e3afb-c111-4bf0-a107-0156c2f3a0f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:23:43.533233 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.533207 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0d6bd978-a62b-4e69-9786-a9b7774d09db-cnibin\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.533233 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.533234 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0d6bd978-a62b-4e69-9786-a9b7774d09db-system-cni-dir\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.533401 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.533249 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0d6bd978-a62b-4e69-9786-a9b7774d09db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.533401 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.533269 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b554m\" (UniqueName: \"kubernetes.io/projected/0d6bd978-a62b-4e69-9786-a9b7774d09db-kube-api-access-b554m\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.533401 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.533297 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0d6bd978-a62b-4e69-9786-a9b7774d09db-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.533401 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.533314 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0d6bd978-a62b-4e69-9786-a9b7774d09db-cnibin\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.533401 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.533335 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0d6bd978-a62b-4e69-9786-a9b7774d09db-cni-binary-copy\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.533401 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.533318 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0d6bd978-a62b-4e69-9786-a9b7774d09db-system-cni-dir\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.533401 
ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.533362 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/0d6bd978-a62b-4e69-9786-a9b7774d09db-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.533401 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.533387 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0d6bd978-a62b-4e69-9786-a9b7774d09db-os-release\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.533864 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.533445 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0d6bd978-a62b-4e69-9786-a9b7774d09db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.533864 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.533484 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0d6bd978-a62b-4e69-9786-a9b7774d09db-os-release\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.533864 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.533755 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0d6bd978-a62b-4e69-9786-a9b7774d09db-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.533983 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.533889 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0d6bd978-a62b-4e69-9786-a9b7774d09db-cni-binary-copy\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.534355 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.534338 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/0d6bd978-a62b-4e69-9786-a9b7774d09db-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.541932 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.541912 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b554m\" (UniqueName: \"kubernetes.io/projected/0d6bd978-a62b-4e69-9786-a9b7774d09db-kube-api-access-b554m\") pod \"multus-additional-cni-plugins-jvbnt\" (UID: \"0d6bd978-a62b-4e69-9786-a9b7774d09db\") " pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.631972 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.631905 2570 util.go:30] "No sandbox for pod can 
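
Each volume of multus-additional-cni-plugins-jvbnt appears twice above: reconciler_common.go logs "operationExecutor.MountVolume started" when the diff between desired and actual volume state triggers a mount, and operation_generator.go logs "SetUp succeeded" when the operation finishes; the interleaving shows the operations run concurrently. A toy version of that reconcile pattern (illustrative only; nothing here is the kubelet's actual data structure):

    package main

    import (
    	"fmt"
    	"sync"
    )

    // reconcile diffs desired volumes against actual state and launches one
    // mount operation per missing volume, mirroring the started/succeeded
    // pairs in the log.
    func reconcile(desired []string, actual map[string]bool, mount func(string) error) {
    	var wg sync.WaitGroup
    	for _, v := range desired {
    		if actual[v] {
    			continue
    		}
    		wg.Add(1)
    		go func(vol string) { // operations run concurrently, hence the interleaved log lines
    			defer wg.Done()
    			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", vol)
    			if err := mount(vol); err == nil {
    				fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", vol)
    			}
    		}(v)
    	}
    	wg.Wait()
    }

    func main() {
    	reconcile([]string{"cnibin", "system-cni-dir"}, map[string]bool{},
    		func(string) error { return nil })
    }
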
Apr 24 14:23:43.631972 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.631905 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fn8gq"
Apr 24 14:23:43.637566 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:43.637535 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b6507b4_e71a_44e1_8d03_18abcb3b225d.slice/crio-b355a011f2c251bf9157b978a4a1ac44d03a59ef8636f6ecf3aea37146f1d8d2 WatchSource:0}: Error finding container b355a011f2c251bf9157b978a4a1ac44d03a59ef8636f6ecf3aea37146f1d8d2: Status 404 returned error can't find the container with id b355a011f2c251bf9157b978a4a1ac44d03a59ef8636f6ecf3aea37146f1d8d2
Apr 24 14:23:43.651394 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.651369 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-lfbtg"
Apr 24 14:23:43.657445 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.657419 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-dlcwf"
Apr 24 14:23:43.657964 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:43.657945 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bb3748f_64b2_4249_91e8_54ba5dd9c145.slice/crio-dd00629a58f93c79961dd318480a2e35d48339dbbb93704063a0736fb1ac06b4 WatchSource:0}: Error finding container dd00629a58f93c79961dd318480a2e35d48339dbbb93704063a0736fb1ac06b4: Status 404 returned error can't find the container with id dd00629a58f93c79961dd318480a2e35d48339dbbb93704063a0736fb1ac06b4
Apr 24 14:23:43.662731 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.662709 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-qd58c"
Apr 24 14:23:43.663575 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:43.663558 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb63a7014_b666_4162_b36c_f215db9ea517.slice/crio-f3e6e85462dceb3ff629359ccfa2e14da4181fd711e68f38e37dc882af89e813 WatchSource:0}: Error finding container f3e6e85462dceb3ff629359ccfa2e14da4181fd711e68f38e37dc882af89e813: Status 404 returned error can't find the container with id f3e6e85462dceb3ff629359ccfa2e14da4181fd711e68f38e37dc882af89e813
Apr 24 14:23:43.667896 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.667874 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-cmhmr"
Apr 24 14:23:43.672656 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:43.672633 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd852814e_e573_4e8b_b69a_d17116e07af7.slice/crio-a07dad585046158a24aae1c603ba36ea3c142f7d5e1cba254400e9e6b958a514 WatchSource:0}: Error finding container a07dad585046158a24aae1c603ba36ea3c142f7d5e1cba254400e9e6b958a514: Status 404 returned error can't find the container with id a07dad585046158a24aae1c603ba36ea3c142f7d5e1cba254400e9e6b958a514
Apr 24 14:23:43.673233 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.673216 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dz989"
Apr 24 14:23:43.677516 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:43.677497 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7966ddd8_be1a_45a0_8020_2cd96b2fd595.slice/crio-bf50eb4c8fc8859d3a00a7196db6307590d79a3b8e475798654d334a0bc5bca0 WatchSource:0}: Error finding container bf50eb4c8fc8859d3a00a7196db6307590d79a3b8e475798654d334a0bc5bca0: Status 404 returned error can't find the container with id bf50eb4c8fc8859d3a00a7196db6307590d79a3b8e475798654d334a0bc5bca0
Apr 24 14:23:43.678381 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.678361 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8"
Apr 24 14:23:43.680767 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:43.680747 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod090e3afb_c111_4bf0_a107_0156c2f3a0f2.slice/crio-f249a1ee9930d85285c206202a6c187822e43339eb23cd700684d9a3edf7ec7f WatchSource:0}: Error finding container f249a1ee9930d85285c206202a6c187822e43339eb23cd700684d9a3edf7ec7f: Status 404 returned error can't find the container with id f249a1ee9930d85285c206202a6c187822e43339eb23cd700684d9a3edf7ec7f
Apr 24 14:23:43.683642 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.683625 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-k7xj4"
Apr 24 14:23:43.685729 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:43.685710 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8fc5b0b_35cf_4f2f_9bdb_b0e5e3061363.slice/crio-13b5bb636c699e172f05db9b650083d5aba64cf0310e041687c23fe057544ccb WatchSource:0}: Error finding container 13b5bb636c699e172f05db9b650083d5aba64cf0310e041687c23fe057544ccb: Status 404 returned error can't find the container with id 13b5bb636c699e172f05db9b650083d5aba64cf0310e041687c23fe057544ccb
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jvbnt" Apr 24 14:23:43.689956 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:43.689935 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e8d53e8_f5d3_4863_b7e1_8141078a84b3.slice/crio-076ab64c5d0dfc299cde71d5559ebb367afd272f7df7a68a6ade2ad686b361d9 WatchSource:0}: Error finding container 076ab64c5d0dfc299cde71d5559ebb367afd272f7df7a68a6ade2ad686b361d9: Status 404 returned error can't find the container with id 076ab64c5d0dfc299cde71d5559ebb367afd272f7df7a68a6ade2ad686b361d9 Apr 24 14:23:43.694578 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:23:43.694556 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d6bd978_a62b_4e69_9786_a9b7774d09db.slice/crio-5b5fda8acb536c16d9eaa209801cfd4a8b74c7be6a36e4389a24bfd4195be643 WatchSource:0}: Error finding container 5b5fda8acb536c16d9eaa209801cfd4a8b74c7be6a36e4389a24bfd4195be643: Status 404 returned error can't find the container with id 5b5fda8acb536c16d9eaa209801cfd4a8b74c7be6a36e4389a24bfd4195be643 Apr 24 14:23:43.733592 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.733574 2570 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 14:23:43.759756 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.759731 2570 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 14:23:43.937758 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:43.937671 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs\") pod \"network-metrics-daemon-ld8rd\" (UID: \"62277dce-4b78-4158-9951-1292c0fa443c\") " pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:23:43.937932 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:43.937828 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 14:23:43.937932 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:43.937904 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs podName:62277dce-4b78-4158-9951-1292c0fa443c nodeName:}" failed. No retries permitted until 2026-04-24 14:23:44.937885214 +0000 UTC m=+3.063289009 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs") pod "network-metrics-daemon-ld8rd" (UID: "62277dce-4b78-4158-9951-1292c0fa443c") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 14:23:44.038936 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:44.038904 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-68rcm\" (UniqueName: \"kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm\") pod \"network-check-target-7b299\" (UID: \"cb704828-9d72-448f-8256-1dda6f6273ea\") " pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:23:44.039117 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:44.039064 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 24 14:23:44.039117 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:44.039084 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 24 14:23:44.039117 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:44.039096 2570 projected.go:194] Error preparing data for projected volume kube-api-access-68rcm for pod openshift-network-diagnostics/network-check-target-7b299: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 14:23:44.039334 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:44.039173 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm podName:cb704828-9d72-448f-8256-1dda6f6273ea nodeName:}" failed. No retries permitted until 2026-04-24 14:23:45.039152614 +0000 UTC m=+3.164556413 (durationBeforeRetry 1s). 
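
Note the durationBeforeRetry progression across these repeated failures: 500ms on the first attempt, 1s here, then 2s, 4s, 8s, and 16s further down. The kubelet parks each failed volume operation in nestedpendingoperations and doubles the backoff on every consecutive failure, up to a cap. A sketch of that schedule (the doubling and the values through 16s are taken from this log; the 2m2s cap is an assumption based on the kubelet's exponential-backoff helper):

    package main

    import (
    	"fmt"
    	"time"
    )

    // schedule returns the first n retry delays of a doubling backoff with a cap,
    // mirroring the durationBeforeRetry values in these entries.
    func schedule(initial, max time.Duration, n int) []time.Duration {
    	out := make([]time.Duration, 0, n)
    	d := initial
    	for i := 0; i < n; i++ {
    		out = append(out, d)
    		if d *= 2; d > max {
    			d = max
    		}
    	}
    	return out
    }

    func main() {
    	fmt.Println(schedule(500*time.Millisecond, 2*time.Minute+2*time.Second, 10))
    	// [500ms 1s 2s 4s 8s 16s 32s 1m4s 2m2s 2m2s]
    }
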
Apr 24 14:23:44.364369 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:44.364273 2570 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-23 14:18:43 +0000 UTC" deadline="2027-10-07 01:10:52.569142679 +0000 UTC"
Apr 24 14:23:44.364369 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:44.364315 2570 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="12730h47m8.204831928s"
Apr 24 14:23:44.448012 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:44.447976 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvbnt" event={"ID":"0d6bd978-a62b-4e69-9786-a9b7774d09db","Type":"ContainerStarted","Data":"5b5fda8acb536c16d9eaa209801cfd4a8b74c7be6a36e4389a24bfd4195be643"}
Apr 24 14:23:44.462769 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:44.460845 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" event={"ID":"090e3afb-c111-4bf0-a107-0156c2f3a0f2","Type":"ContainerStarted","Data":"f249a1ee9930d85285c206202a6c187822e43339eb23cd700684d9a3edf7ec7f"}
Apr 24 14:23:44.474190 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:44.472207 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cmhmr" event={"ID":"7966ddd8-be1a-45a0-8020-2cd96b2fd595","Type":"ContainerStarted","Data":"bf50eb4c8fc8859d3a00a7196db6307590d79a3b8e475798654d334a0bc5bca0"}
Apr 24 14:23:44.483526 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:44.483486 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-dlcwf" event={"ID":"b63a7014-b666-4162-b36c-f215db9ea517","Type":"ContainerStarted","Data":"f3e6e85462dceb3ff629359ccfa2e14da4181fd711e68f38e37dc882af89e813"}
Apr 24 14:23:44.489244 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:44.489206 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-34.ec2.internal" event={"ID":"52f7ef5b748605fa2e3167b9e181ddfa","Type":"ContainerStarted","Data":"442df997f206408f0ce25cc03c7c21c1c2d1b54722373afd18458ca5c78b3f17"}
Apr 24 14:23:44.495621 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:44.495592 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-k7xj4" event={"ID":"2e8d53e8-f5d3-4863-b7e1-8141078a84b3","Type":"ContainerStarted","Data":"076ab64c5d0dfc299cde71d5559ebb367afd272f7df7a68a6ade2ad686b361d9"}
Apr 24 14:23:44.497534 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:44.497507 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" event={"ID":"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363","Type":"ContainerStarted","Data":"13b5bb636c699e172f05db9b650083d5aba64cf0310e041687c23fe057544ccb"}
Apr 24 14:23:44.499482 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:44.499458 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-qd58c" event={"ID":"d852814e-e573-4e8b-b69a-d17116e07af7","Type":"ContainerStarted","Data":"a07dad585046158a24aae1c603ba36ea3c142f7d5e1cba254400e9e6b958a514"}
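
Two threads are visible here. The PLEG events confirm each pod sandbox as started, with Data carrying the very container IDs the cAdvisor watcher 404ed on moments earlier. And the certificate_manager entries show the kubelet scheduling rotation of its serving certificate: the deadline is drawn with jitter from late in the certificate's validity window, which is why two deadlines logged a second apart (2027-10-07 above, 2027-12-23 below) differ for the same 2028-04-23 expiration. A sketch of that jittered deadline; the 70-90% window is an assumption in the spirit of client-go's certificate manager, not a value read from this log:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // rotationDeadline picks a random point in roughly the last third of the
    // certificate's validity, so each evaluation yields a different deadline.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := notAfter.Sub(notBefore)
    	frac := 0.7 + 0.2*rand.Float64() // assumed jitter window
    	return notBefore.Add(time.Duration(float64(total) * frac))
    }

    func main() {
    	nb := time.Date(2026, 4, 24, 14, 18, 43, 0, time.UTC) // assumed issue time
    	na := time.Date(2028, 4, 23, 14, 18, 43, 0, time.UTC) // expiration from the log
    	d := rotationDeadline(nb, na)
    	fmt.Println(d, "sleep:", time.Until(d).Round(time.Second))
    }
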
event={"ID":"d852814e-e573-4e8b-b69a-d17116e07af7","Type":"ContainerStarted","Data":"a07dad585046158a24aae1c603ba36ea3c142f7d5e1cba254400e9e6b958a514"} Apr 24 14:23:44.503248 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:44.503199 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-lfbtg" event={"ID":"2bb3748f-64b2-4249-91e8-54ba5dd9c145","Type":"ContainerStarted","Data":"dd00629a58f93c79961dd318480a2e35d48339dbbb93704063a0736fb1ac06b4"} Apr 24 14:23:44.515151 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:44.515127 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fn8gq" event={"ID":"1b6507b4-e71a-44e1-8d03-18abcb3b225d","Type":"ContainerStarted","Data":"b355a011f2c251bf9157b978a4a1ac44d03a59ef8636f6ecf3aea37146f1d8d2"} Apr 24 14:23:44.671761 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:44.671679 2570 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 14:23:44.943897 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:44.943814 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs\") pod \"network-metrics-daemon-ld8rd\" (UID: \"62277dce-4b78-4158-9951-1292c0fa443c\") " pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:23:44.944071 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:44.943972 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 14:23:44.944071 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:44.944034 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs podName:62277dce-4b78-4158-9951-1292c0fa443c nodeName:}" failed. No retries permitted until 2026-04-24 14:23:46.944016304 +0000 UTC m=+5.069420102 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs") pod "network-metrics-daemon-ld8rd" (UID: "62277dce-4b78-4158-9951-1292c0fa443c") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 14:23:45.044566 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:45.044533 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-68rcm\" (UniqueName: \"kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm\") pod \"network-check-target-7b299\" (UID: \"cb704828-9d72-448f-8256-1dda6f6273ea\") " pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:23:45.044780 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:45.044683 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 24 14:23:45.044780 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:45.044703 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 24 14:23:45.044780 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:45.044715 2570 projected.go:194] Error preparing data for projected volume kube-api-access-68rcm for pod openshift-network-diagnostics/network-check-target-7b299: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 14:23:45.044780 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:45.044774 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm podName:cb704828-9d72-448f-8256-1dda6f6273ea nodeName:}" failed. No retries permitted until 2026-04-24 14:23:47.044755748 +0000 UTC m=+5.170159545 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-68rcm" (UniqueName: "kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm") pod "network-check-target-7b299" (UID: "cb704828-9d72-448f-8256-1dda6f6273ea") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 14:23:45.365516 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:45.365403 2570 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-23 14:18:43 +0000 UTC" deadline="2027-12-23 01:23:15.444880475 +0000 UTC" Apr 24 14:23:45.365516 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:45.365439 2570 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="14578h59m30.079444991s" Apr 24 14:23:45.425757 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:45.424907 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:23:45.425757 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:45.425047 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
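
The pod_workers errors are the flip side of the sandbox starts above: pods on the pod network cannot be synced until a CNI config appears in /etc/kubernetes/cni/net.d/, which on this node happens only after ovnkube-node-dz989 and multus-fn8gq come up and write it; host-network pods are unaffected. The runtime's readiness check amounts to "is there at least one CNI config file in the conf dir". A minimal stand-in for that check (illustrative; the real check lives in the container runtime, and the accepted extensions are assumed from the libcni convention, not taken from this log):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // cniConfigured reports whether dir contains any CNI network config file.
    func cniConfigured(dir string) (bool, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return false, err
    	}
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json": // assumed extension list
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := cniConfigured("/etc/kubernetes/cni/net.d") // path from the log
    	fmt.Println(ok, err)
    }
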
pod="openshift-network-diagnostics/network-check-target-7b299" podUID="cb704828-9d72-448f-8256-1dda6f6273ea" Apr 24 14:23:45.425757 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:45.425500 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:23:45.425757 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:45.425610 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c" Apr 24 14:23:45.678895 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:45.678817 2570 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 14:23:46.962351 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:46.962316 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs\") pod \"network-metrics-daemon-ld8rd\" (UID: \"62277dce-4b78-4158-9951-1292c0fa443c\") " pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:23:46.962775 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:46.962509 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 14:23:46.962775 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:46.962574 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs podName:62277dce-4b78-4158-9951-1292c0fa443c nodeName:}" failed. No retries permitted until 2026-04-24 14:23:50.962554543 +0000 UTC m=+9.087958339 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs") pod "network-metrics-daemon-ld8rd" (UID: "62277dce-4b78-4158-9951-1292c0fa443c") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 14:23:47.062863 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:47.062816 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-68rcm\" (UniqueName: \"kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm\") pod \"network-check-target-7b299\" (UID: \"cb704828-9d72-448f-8256-1dda6f6273ea\") " pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:23:47.063014 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:47.062982 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 24 14:23:47.063014 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:47.063002 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 24 14:23:47.063014 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:47.063015 2570 projected.go:194] Error preparing data for projected volume kube-api-access-68rcm for pod openshift-network-diagnostics/network-check-target-7b299: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 14:23:47.063196 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:47.063129 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm podName:cb704828-9d72-448f-8256-1dda6f6273ea nodeName:}" failed. No retries permitted until 2026-04-24 14:23:51.063080735 +0000 UTC m=+9.188484543 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-68rcm" (UniqueName: "kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm") pod "network-check-target-7b299" (UID: "cb704828-9d72-448f-8256-1dda6f6273ea") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 14:23:47.425127 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:47.425027 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:23:47.425296 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:47.425190 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c" Apr 24 14:23:47.425643 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:47.425624 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:23:47.425740 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:47.425718 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7b299" podUID="cb704828-9d72-448f-8256-1dda6f6273ea" Apr 24 14:23:49.424626 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:49.424587 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:23:49.424626 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:49.424612 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:23:49.425170 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:49.424739 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c" Apr 24 14:23:49.425170 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:49.424879 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7b299" podUID="cb704828-9d72-448f-8256-1dda6f6273ea" Apr 24 14:23:50.997297 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:50.997224 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs\") pod \"network-metrics-daemon-ld8rd\" (UID: \"62277dce-4b78-4158-9951-1292c0fa443c\") " pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:23:50.997753 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:50.997468 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 14:23:50.997753 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:50.997534 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs podName:62277dce-4b78-4158-9951-1292c0fa443c nodeName:}" failed. No retries permitted until 2026-04-24 14:23:58.997513004 +0000 UTC m=+17.122916809 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs") pod "network-metrics-daemon-ld8rd" (UID: "62277dce-4b78-4158-9951-1292c0fa443c") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 14:23:51.098112 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:51.098035 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-68rcm\" (UniqueName: \"kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm\") pod \"network-check-target-7b299\" (UID: \"cb704828-9d72-448f-8256-1dda6f6273ea\") " pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:23:51.098285 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:51.098233 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 24 14:23:51.098285 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:51.098260 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 24 14:23:51.098285 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:51.098276 2570 projected.go:194] Error preparing data for projected volume kube-api-access-68rcm for pod openshift-network-diagnostics/network-check-target-7b299: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 14:23:51.098417 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:51.098340 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm podName:cb704828-9d72-448f-8256-1dda6f6273ea nodeName:}" failed. No retries permitted until 2026-04-24 14:23:59.098320672 +0000 UTC m=+17.223724467 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-68rcm" (UniqueName: "kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm") pod "network-check-target-7b299" (UID: "cb704828-9d72-448f-8256-1dda6f6273ea") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 14:23:51.425004 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:51.424935 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:23:51.425162 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:51.425085 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c" Apr 24 14:23:51.425478 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:51.425458 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:23:51.425596 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:51.425555 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7b299" podUID="cb704828-9d72-448f-8256-1dda6f6273ea" Apr 24 14:23:52.578056 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:52.578019 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-wz4gq"] Apr 24 14:23:52.581030 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:52.581003 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:23:52.581181 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:52.581084 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-wz4gq" podUID="0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b" Apr 24 14:23:52.610161 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:52.610129 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-kubelet-config\") pod \"global-pull-secret-syncer-wz4gq\" (UID: \"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b\") " pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:23:52.610161 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:52.610169 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-dbus\") pod \"global-pull-secret-syncer-wz4gq\" (UID: \"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b\") " pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:23:52.610389 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:52.610217 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret\") pod \"global-pull-secret-syncer-wz4gq\" (UID: \"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b\") " pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:23:52.711485 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:52.711444 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-kubelet-config\") pod \"global-pull-secret-syncer-wz4gq\" (UID: \"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b\") " pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:23:52.711648 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:52.711492 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-dbus\") pod \"global-pull-secret-syncer-wz4gq\" (UID: \"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b\") " pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 
14:23:52.711648 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:52.711522 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret\") pod \"global-pull-secret-syncer-wz4gq\" (UID: \"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b\") " pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:23:52.711648 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:52.711579 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-kubelet-config\") pod \"global-pull-secret-syncer-wz4gq\" (UID: \"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b\") " pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:23:52.711648 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:52.711638 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 24 14:23:52.711887 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:52.711702 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret podName:0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b nodeName:}" failed. No retries permitted until 2026-04-24 14:23:53.211682765 +0000 UTC m=+11.337086567 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret") pod "global-pull-secret-syncer-wz4gq" (UID: "0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b") : object "kube-system"/"original-pull-secret" not registered Apr 24 14:23:52.711887 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:52.711708 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-dbus\") pod \"global-pull-secret-syncer-wz4gq\" (UID: \"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b\") " pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:23:53.215514 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:53.215485 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret\") pod \"global-pull-secret-syncer-wz4gq\" (UID: \"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b\") " pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:23:53.215655 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:53.215634 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 24 14:23:53.215717 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:53.215706 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret podName:0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b nodeName:}" failed. No retries permitted until 2026-04-24 14:23:54.21569064 +0000 UTC m=+12.341094439 (durationBeforeRetry 1s). 
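
A handy detail when reading these retries: every "No retries permitted until" line carries the target wall-clock time plus a monotonic offset such as m=+12.341094439, i.e. seconds since this kubelet process started, which confirms this whole section covers roughly the first twenty seconds after startup. A small extractor (reading aid only; the m=+ suffix is Go's standard monotonic-clock formatting):

    package main

    import (
    	"fmt"
    	"regexp"
    	"strconv"
    )

    // mono pulls the "m=+<seconds>" offset that Go's time formatting appends
    // to the retry deadlines in these entries.
    var mono = regexp.MustCompile(`m=\+([0-9.]+)`)

    func sinceStart(logLine string) (float64, bool) {
    	m := mono.FindStringSubmatch(logLine)
    	if m == nil {
    		return 0, false
    	}
    	s, err := strconv.ParseFloat(m[1], 64)
    	return s, err == nil
    }

    func main() {
    	s, _ := sinceStart(`No retries permitted until 2026-04-24 14:23:54.21569064 ` +
    		`+0000 UTC m=+12.341094439 (durationBeforeRetry 1s)`)
    	fmt.Printf("%.1fs after kubelet start\n", s) // 12.3s after kubelet start
    }
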
Apr 24 14:23:53.424518 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:53.424484 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299"
Apr 24 14:23:53.424649 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:53.424495 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd"
Apr 24 14:23:53.424649 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:53.424579 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7b299" podUID="cb704828-9d72-448f-8256-1dda6f6273ea"
Apr 24 14:23:53.424762 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:53.424652 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c"
Apr 24 14:23:54.224429 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:54.224394 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret\") pod \"global-pull-secret-syncer-wz4gq\" (UID: \"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b\") " pod="kube-system/global-pull-secret-syncer-wz4gq"
Apr 24 14:23:54.225174 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:54.224569 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 24 14:23:54.225174 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:54.224667 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret podName:0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b nodeName:}" failed. No retries permitted until 2026-04-24 14:23:56.224645881 +0000 UTC m=+14.350049677 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret") pod "global-pull-secret-syncer-wz4gq" (UID: "0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b") : object "kube-system"/"original-pull-secret" not registered
Apr 24 14:23:54.424279 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:54.424246 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-wz4gq"
Apr 24 14:23:54.424424 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:54.424380 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-wz4gq" podUID="0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b"
Apr 24 14:23:55.424731 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:55.424698 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd"
Apr 24 14:23:55.425173 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:55.424699 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299"
Apr 24 14:23:55.425173 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:55.424822 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c"
Apr 24 14:23:55.425173 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:55.424875 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7b299" podUID="cb704828-9d72-448f-8256-1dda6f6273ea"
Apr 24 14:23:56.240643 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:56.240600 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret\") pod \"global-pull-secret-syncer-wz4gq\" (UID: \"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b\") " pod="kube-system/global-pull-secret-syncer-wz4gq"
Apr 24 14:23:56.240830 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:56.240773 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 24 14:23:56.240894 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:56.240850 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret podName:0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b nodeName:}" failed. No retries permitted until 2026-04-24 14:24:00.240828325 +0000 UTC m=+18.366232124 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret") pod "global-pull-secret-syncer-wz4gq" (UID: "0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b") : object "kube-system"/"original-pull-secret" not registered
Apr 24 14:23:56.424422 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:56.424387 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-wz4gq"
Apr 24 14:23:56.424597 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:56.424511 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-wz4gq" podUID="0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b"
Apr 24 14:23:57.424567 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:57.424435 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd"
Apr 24 14:23:57.424567 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:57.424462 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299"
Apr 24 14:23:57.425154 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:57.424570 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c"
Apr 24 14:23:57.425154 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:57.424701 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7b299" podUID="cb704828-9d72-448f-8256-1dda6f6273ea"
Apr 24 14:23:58.428874 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:58.428841 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-wz4gq"
Apr 24 14:23:58.429340 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:58.428995 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-wz4gq" podUID="0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b"
Apr 24 14:23:59.062333 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:59.062292 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs\") pod \"network-metrics-daemon-ld8rd\" (UID: \"62277dce-4b78-4158-9951-1292c0fa443c\") " pod="openshift-multus/network-metrics-daemon-ld8rd"
Apr 24 14:23:59.062536 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:59.062457 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 24 14:23:59.062585 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:59.062537 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs podName:62277dce-4b78-4158-9951-1292c0fa443c nodeName:}" failed. No retries permitted until 2026-04-24 14:24:15.062517585 +0000 UTC m=+33.187921381 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs") pod "network-metrics-daemon-ld8rd" (UID: "62277dce-4b78-4158-9951-1292c0fa443c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 24 14:23:59.162685 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:59.162646 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-68rcm\" (UniqueName: \"kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm\") pod \"network-check-target-7b299\" (UID: \"cb704828-9d72-448f-8256-1dda6f6273ea\") " pod="openshift-network-diagnostics/network-check-target-7b299"
Apr 24 14:23:59.162868 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:59.162782 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 24 14:23:59.162868 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:59.162809 2570 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 24 14:23:59.162868 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:59.162825 2570 projected.go:194] Error preparing data for projected volume kube-api-access-68rcm for pod openshift-network-diagnostics/network-check-target-7b299: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 24 14:23:59.163023 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:59.162880 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm podName:cb704828-9d72-448f-8256-1dda6f6273ea nodeName:}" failed. No retries permitted until 2026-04-24 14:24:15.162867158 +0000 UTC m=+33.288270956 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-68rcm" (UniqueName: "kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm") pod "network-check-target-7b299" (UID: "cb704828-9d72-448f-8256-1dda6f6273ea") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 24 14:23:59.424495 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:59.424407 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299"
Apr 24 14:23:59.424661 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:23:59.424410 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd"
Apr 24 14:23:59.424661 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:59.424549 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-7b299" podUID="cb704828-9d72-448f-8256-1dda6f6273ea" Apr 24 14:23:59.424661 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:23:59.424641 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c" Apr 24 14:24:00.272315 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:00.272081 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret\") pod \"global-pull-secret-syncer-wz4gq\" (UID: \"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b\") " pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:24:00.272869 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:00.272230 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 24 14:24:00.272869 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:00.272408 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret podName:0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b nodeName:}" failed. No retries permitted until 2026-04-24 14:24:08.27238561 +0000 UTC m=+26.397789404 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret") pod "global-pull-secret-syncer-wz4gq" (UID: "0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b") : object "kube-system"/"original-pull-secret" not registered Apr 24 14:24:00.424746 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:00.424712 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:24:00.424931 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:00.424842 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-wz4gq" podUID="0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b" Apr 24 14:24:01.424985 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:01.424956 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:24:01.425356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:01.424963 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:24:01.425356 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:01.425049 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c" Apr 24 14:24:01.425356 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:01.425179 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7b299" podUID="cb704828-9d72-448f-8256-1dda6f6273ea" Apr 24 14:24:02.426535 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.426334 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:24:02.427053 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:02.426552 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-wz4gq" podUID="0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b" Apr 24 14:24:02.560220 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.559964 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fn8gq" event={"ID":"1b6507b4-e71a-44e1-8d03-18abcb3b225d","Type":"ContainerStarted","Data":"eb042de8c54bb4cc417c9f25223b204d5fe725c22d6708a71025e86de7f46e22"} Apr 24 14:24:02.561271 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.561248 2570 generic.go:358] "Generic (PLEG): container finished" podID="84b73010dde18c2e537db575f397d1b5" containerID="7a29213636d3610aec9f8936c6bde66aa99f8ea9ff035b29ff66e9b8e0c7dda3" exitCode=0 Apr 24 14:24:02.561388 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.561304 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal" event={"ID":"84b73010dde18c2e537db575f397d1b5","Type":"ContainerDied","Data":"7a29213636d3610aec9f8936c6bde66aa99f8ea9ff035b29ff66e9b8e0c7dda3"} Apr 24 14:24:02.562623 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.562598 2570 generic.go:358] "Generic (PLEG): container finished" podID="0d6bd978-a62b-4e69-9786-a9b7774d09db" containerID="63143386b2128f79cf9f01c9fa3e3297f027bdc087a02a0fa04fe5ea0ee084c2" exitCode=0 Apr 24 14:24:02.562714 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.562675 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvbnt" event={"ID":"0d6bd978-a62b-4e69-9786-a9b7774d09db","Type":"ContainerDied","Data":"63143386b2128f79cf9f01c9fa3e3297f027bdc087a02a0fa04fe5ea0ee084c2"} Apr 24 14:24:02.565164 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.565149 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 14:24:02.565465 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.565449 2570 generic.go:358] "Generic (PLEG): container finished" podID="090e3afb-c111-4bf0-a107-0156c2f3a0f2" containerID="16bfc7dd876c6bc650f71fa76d7ea84ad6b681f1504101ef46f48604ab6a2a44" exitCode=1 Apr 24 14:24:02.565548 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.565510 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-dz989" event={"ID":"090e3afb-c111-4bf0-a107-0156c2f3a0f2","Type":"ContainerStarted","Data":"a69f32b57c988a3c54234836bdafe5629651ad5da34f1f0ac30ecc70b6fa731c"} Apr 24 14:24:02.565548 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.565543 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" event={"ID":"090e3afb-c111-4bf0-a107-0156c2f3a0f2","Type":"ContainerStarted","Data":"0f88eca6aeaf7ac14b6f9e2d8974925fb23b7076a56dc0527f3104dbb278f637"} Apr 24 14:24:02.565638 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.565555 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" event={"ID":"090e3afb-c111-4bf0-a107-0156c2f3a0f2","Type":"ContainerStarted","Data":"7622be49d03b3c9449494be41d8bf58e31bfe7494bde1952c299a6d0ac82c85a"} Apr 24 14:24:02.565638 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.565564 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" event={"ID":"090e3afb-c111-4bf0-a107-0156c2f3a0f2","Type":"ContainerStarted","Data":"3e3d7f1a00e60511678645bc4c1e7511120ff8b4d9394e6c602b4d9dc5ee7ab3"} Apr 24 14:24:02.565638 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.565573 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" event={"ID":"090e3afb-c111-4bf0-a107-0156c2f3a0f2","Type":"ContainerDied","Data":"16bfc7dd876c6bc650f71fa76d7ea84ad6b681f1504101ef46f48604ab6a2a44"} Apr 24 14:24:02.565638 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.565587 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" event={"ID":"090e3afb-c111-4bf0-a107-0156c2f3a0f2","Type":"ContainerStarted","Data":"03c19450a2a2e8f0632c4596b0281e7cff12489c153df6b5b5e3a7cd8847789b"} Apr 24 14:24:02.566685 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.566656 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cmhmr" event={"ID":"7966ddd8-be1a-45a0-8020-2cd96b2fd595","Type":"ContainerStarted","Data":"af7f170d6c43508e5ac04a1d925333387dcd27401c58dea81487d04433e1b0f2"} Apr 24 14:24:02.567778 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.567756 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-dlcwf" event={"ID":"b63a7014-b666-4162-b36c-f215db9ea517","Type":"ContainerStarted","Data":"0333079726a112be24b4334024a9ab2d3e8177bc363c8fd4ae69706902f5be72"} Apr 24 14:24:02.568839 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.568812 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-34.ec2.internal" event={"ID":"52f7ef5b748605fa2e3167b9e181ddfa","Type":"ContainerStarted","Data":"1242e6192977cf65ae303b06d2913e29b575c954414525934fca7b8446306bf1"} Apr 24 14:24:02.570594 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.570522 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-k7xj4" event={"ID":"2e8d53e8-f5d3-4863-b7e1-8141078a84b3","Type":"ContainerStarted","Data":"15638c065e9b079341cc42ae03c78850ab4d8b8d7d33fbc1792c65dd98470813"} Apr 24 14:24:02.571783 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.571764 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" 
event={"ID":"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363","Type":"ContainerStarted","Data":"ba2f221a8816041e879e136e8d4f5c374064f4b7c53bc7a8bc734064b7baa851"} Apr 24 14:24:02.572841 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.572816 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-qd58c" event={"ID":"d852814e-e573-4e8b-b69a-d17116e07af7","Type":"ContainerStarted","Data":"1d2d27b48da570c3c50f19b47edd86b50d7b4496b9bfd1869fc9034b417295a4"} Apr 24 14:24:02.575442 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.575391 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fn8gq" podStartSLOduration=2.742239498 podStartE2EDuration="20.575379012s" podCreationTimestamp="2026-04-24 14:23:42 +0000 UTC" firstStartedPulling="2026-04-24 14:23:43.639421475 +0000 UTC m=+1.764825267" lastFinishedPulling="2026-04-24 14:24:01.472560989 +0000 UTC m=+19.597964781" observedRunningTime="2026-04-24 14:24:02.575245343 +0000 UTC m=+20.700649187" watchObservedRunningTime="2026-04-24 14:24:02.575379012 +0000 UTC m=+20.700782829" Apr 24 14:24:02.614913 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.614861 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-dlcwf" podStartSLOduration=2.873839314 podStartE2EDuration="20.614845206s" podCreationTimestamp="2026-04-24 14:23:42 +0000 UTC" firstStartedPulling="2026-04-24 14:23:43.666391385 +0000 UTC m=+1.791795178" lastFinishedPulling="2026-04-24 14:24:01.407397263 +0000 UTC m=+19.532801070" observedRunningTime="2026-04-24 14:24:02.61468155 +0000 UTC m=+20.740085365" watchObservedRunningTime="2026-04-24 14:24:02.614845206 +0000 UTC m=+20.740249021" Apr 24 14:24:02.629490 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.629435 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-cmhmr" podStartSLOduration=2.900902717 podStartE2EDuration="20.629417459s" podCreationTimestamp="2026-04-24 14:23:42 +0000 UTC" firstStartedPulling="2026-04-24 14:23:43.679243297 +0000 UTC m=+1.804647093" lastFinishedPulling="2026-04-24 14:24:01.407758043 +0000 UTC m=+19.533161835" observedRunningTime="2026-04-24 14:24:02.629357475 +0000 UTC m=+20.754761291" watchObservedRunningTime="2026-04-24 14:24:02.629417459 +0000 UTC m=+20.754821275" Apr 24 14:24:02.643018 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.642979 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-34.ec2.internal" podStartSLOduration=19.642964044 podStartE2EDuration="19.642964044s" podCreationTimestamp="2026-04-24 14:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:24:02.642843551 +0000 UTC m=+20.768247365" watchObservedRunningTime="2026-04-24 14:24:02.642964044 +0000 UTC m=+20.768367858" Apr 24 14:24:02.659726 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.659678 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-qd58c" podStartSLOduration=2.875710017 podStartE2EDuration="20.659654668s" podCreationTimestamp="2026-04-24 14:23:42 +0000 UTC" firstStartedPulling="2026-04-24 14:23:43.674182426 +0000 UTC m=+1.799586227" lastFinishedPulling="2026-04-24 14:24:01.458127071 +0000 UTC m=+19.583530878" observedRunningTime="2026-04-24 14:24:02.658982779 +0000 UTC 
m=+20.784386594" watchObservedRunningTime="2026-04-24 14:24:02.659654668 +0000 UTC m=+20.785058482" Apr 24 14:24:02.672594 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:02.672548 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-k7xj4" podStartSLOduration=7.562875357 podStartE2EDuration="20.672533637s" podCreationTimestamp="2026-04-24 14:23:42 +0000 UTC" firstStartedPulling="2026-04-24 14:23:43.691380297 +0000 UTC m=+1.816784090" lastFinishedPulling="2026-04-24 14:23:56.801038564 +0000 UTC m=+14.926442370" observedRunningTime="2026-04-24 14:24:02.672495379 +0000 UTC m=+20.797899195" watchObservedRunningTime="2026-04-24 14:24:02.672533637 +0000 UTC m=+20.797937451" Apr 24 14:24:03.147619 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:03.147595 2570 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Apr 24 14:24:03.394376 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:03.394234 2570 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-24T14:24:03.147613373Z","UUID":"1369e850-86fb-4171-8223-1b685d2ddcdd","Handler":null,"Name":"","Endpoint":""} Apr 24 14:24:03.396576 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:03.396551 2570 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 24 14:24:03.396576 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:03.396580 2570 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 24 14:24:03.424399 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:03.424365 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:24:03.424547 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:03.424373 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:24:03.424547 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:03.424477 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7b299" podUID="cb704828-9d72-448f-8256-1dda6f6273ea" Apr 24 14:24:03.424642 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:03.424593 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c" Apr 24 14:24:03.577713 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:03.577673 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal" event={"ID":"84b73010dde18c2e537db575f397d1b5","Type":"ContainerStarted","Data":"eabd7eda4d93fc7c52a9bba70360668d4e8c9512ee4d0c4bdab99681142132ec"} Apr 24 14:24:03.579927 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:03.579898 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" event={"ID":"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363","Type":"ContainerStarted","Data":"f6fc672e3941a212f5fc09f4b5e3c3aa14f5e7850a087689bf9f5da17e31bb68"} Apr 24 14:24:03.581524 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:03.581499 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-lfbtg" event={"ID":"2bb3748f-64b2-4249-91e8-54ba5dd9c145","Type":"ContainerStarted","Data":"ad050960212be3620934b772eb39f5ecb69d4ce4cb62f30d3dec6700099abea0"} Apr 24 14:24:03.591506 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:03.591463 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-34.ec2.internal" podStartSLOduration=20.591451095 podStartE2EDuration="20.591451095s" podCreationTimestamp="2026-04-24 14:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:24:03.591330419 +0000 UTC m=+21.716734237" watchObservedRunningTime="2026-04-24 14:24:03.591451095 +0000 UTC m=+21.716854913" Apr 24 14:24:03.603915 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:03.603869 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-lfbtg" podStartSLOduration=3.804505236 podStartE2EDuration="21.603854322s" podCreationTimestamp="2026-04-24 14:23:42 +0000 UTC" firstStartedPulling="2026-04-24 14:23:43.659263693 +0000 UTC m=+1.784667488" lastFinishedPulling="2026-04-24 14:24:01.458612771 +0000 UTC m=+19.584016574" observedRunningTime="2026-04-24 14:24:03.603476421 +0000 UTC m=+21.728880237" watchObservedRunningTime="2026-04-24 14:24:03.603854322 +0000 UTC m=+21.729258140" Apr 24 14:24:03.883268 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:03.883240 2570 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-dlcwf" Apr 24 14:24:03.884341 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:03.884320 2570 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-dlcwf" Apr 24 14:24:04.425245 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:04.425211 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:24:04.425414 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:04.425355 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-wz4gq" podUID="0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b" Apr 24 14:24:04.586324 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:04.586243 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 14:24:04.586736 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:04.586644 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" event={"ID":"090e3afb-c111-4bf0-a107-0156c2f3a0f2","Type":"ContainerStarted","Data":"fc6fd9680eb8c415c59b32e0d7bad8885eae3d0e7e3a9a63545c52a6bc9acef8"} Apr 24 14:24:04.588612 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:04.588573 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" event={"ID":"e8fc5b0b-35cf-4f2f-9bdb-b0e5e3061363","Type":"ContainerStarted","Data":"a4a987055d36b92151385ee75f64504b17ea7a14517330fb844a159745c2dc62"} Apr 24 14:24:04.589476 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:04.589433 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-dlcwf" Apr 24 14:24:04.589641 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:04.589622 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-dlcwf" Apr 24 14:24:04.614397 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:04.614352 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-qjqv8" podStartSLOduration=2.420085878 podStartE2EDuration="22.614339984s" podCreationTimestamp="2026-04-24 14:23:42 +0000 UTC" firstStartedPulling="2026-04-24 14:23:43.68721705 +0000 UTC m=+1.812620844" lastFinishedPulling="2026-04-24 14:24:03.881471144 +0000 UTC m=+22.006874950" observedRunningTime="2026-04-24 14:24:04.613784232 +0000 UTC m=+22.739188049" watchObservedRunningTime="2026-04-24 14:24:04.614339984 +0000 UTC m=+22.739743798" Apr 24 14:24:05.425262 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:05.425224 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:24:05.425419 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:05.425275 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:24:05.425419 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:05.425357 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7b299" podUID="cb704828-9d72-448f-8256-1dda6f6273ea" Apr 24 14:24:05.425502 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:05.425462 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c" Apr 24 14:24:06.424910 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:06.424875 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:24:06.425321 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:06.424991 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-wz4gq" podUID="0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b" Apr 24 14:24:07.424955 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:07.424799 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:24:07.425601 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:07.424799 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:24:07.425601 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:07.425028 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c" Apr 24 14:24:07.425601 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:07.425088 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-7b299" podUID="cb704828-9d72-448f-8256-1dda6f6273ea" Apr 24 14:24:07.596054 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:07.596020 2570 generic.go:358] "Generic (PLEG): container finished" podID="0d6bd978-a62b-4e69-9786-a9b7774d09db" containerID="d69557719bd879cfeaaa9ffc96688226a77bbae1b79d9650176fa802cd4bb05a" exitCode=0 Apr 24 14:24:07.596202 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:07.596117 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvbnt" event={"ID":"0d6bd978-a62b-4e69-9786-a9b7774d09db","Type":"ContainerDied","Data":"d69557719bd879cfeaaa9ffc96688226a77bbae1b79d9650176fa802cd4bb05a"} Apr 24 14:24:07.599046 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:07.599031 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 14:24:07.599379 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:07.599361 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" event={"ID":"090e3afb-c111-4bf0-a107-0156c2f3a0f2","Type":"ContainerStarted","Data":"f0bfcb65be841faeb435bb2dd08344ce4392d20513434f5e35e2f7e2d5872737"} Apr 24 14:24:07.599682 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:07.599661 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:24:07.599682 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:07.599688 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:24:07.599837 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:07.599798 2570 scope.go:117] "RemoveContainer" containerID="16bfc7dd876c6bc650f71fa76d7ea84ad6b681f1504101ef46f48604ab6a2a44" Apr 24 14:24:07.614243 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:07.614225 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:24:07.614421 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:07.614410 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:24:08.329460 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:08.329388 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret\") pod \"global-pull-secret-syncer-wz4gq\" (UID: \"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b\") " pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:24:08.329618 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:08.329557 2570 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 24 14:24:08.329660 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:08.329626 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret podName:0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b nodeName:}" failed. No retries permitted until 2026-04-24 14:24:24.329610783 +0000 UTC m=+42.455014580 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret") pod "global-pull-secret-syncer-wz4gq" (UID: "0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b") : object "kube-system"/"original-pull-secret" not registered Apr 24 14:24:08.425262 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:08.425028 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:24:08.425626 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:08.425370 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-wz4gq" podUID="0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b" Apr 24 14:24:08.603690 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:08.603665 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvbnt" event={"ID":"0d6bd978-a62b-4e69-9786-a9b7774d09db","Type":"ContainerStarted","Data":"bac58ebac3ed10c9a20e401a46b5b99078d7f7f289e9e9030c3e2ce16252000a"} Apr 24 14:24:08.607432 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:08.607414 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 14:24:08.607855 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:08.607827 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" event={"ID":"090e3afb-c111-4bf0-a107-0156c2f3a0f2","Type":"ContainerStarted","Data":"66415198ccb668abdaf4d7cfa7440d4c1cbac64b4e0cf1244e417ff0c0e11d2c"} Apr 24 14:24:08.608045 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:08.608023 2570 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 14:24:08.649956 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:08.649899 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" podStartSLOduration=8.629475243 podStartE2EDuration="26.649883234s" podCreationTimestamp="2026-04-24 14:23:42 +0000 UTC" firstStartedPulling="2026-04-24 14:23:43.682329566 +0000 UTC m=+1.807733362" lastFinishedPulling="2026-04-24 14:24:01.702737546 +0000 UTC m=+19.828141353" observedRunningTime="2026-04-24 14:24:08.6495348 +0000 UTC m=+26.774938618" watchObservedRunningTime="2026-04-24 14:24:08.649883234 +0000 UTC m=+26.775287049" Apr 24 14:24:08.883726 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:08.883646 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-7b299"] Apr 24 14:24:08.883875 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:08.883771 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:24:08.883946 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:08.883868 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-7b299" podUID="cb704828-9d72-448f-8256-1dda6f6273ea" Apr 24 14:24:08.886455 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:08.886433 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-wz4gq"] Apr 24 14:24:08.886572 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:08.886508 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:24:08.886628 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:08.886593 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-wz4gq" podUID="0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b" Apr 24 14:24:08.889555 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:08.889534 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ld8rd"] Apr 24 14:24:08.889652 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:08.889639 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:24:08.889750 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:08.889735 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c" Apr 24 14:24:09.611593 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:09.611559 2570 generic.go:358] "Generic (PLEG): container finished" podID="0d6bd978-a62b-4e69-9786-a9b7774d09db" containerID="bac58ebac3ed10c9a20e401a46b5b99078d7f7f289e9e9030c3e2ce16252000a" exitCode=0 Apr 24 14:24:09.611931 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:09.611651 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvbnt" event={"ID":"0d6bd978-a62b-4e69-9786-a9b7774d09db","Type":"ContainerDied","Data":"bac58ebac3ed10c9a20e401a46b5b99078d7f7f289e9e9030c3e2ce16252000a"} Apr 24 14:24:09.611931 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:09.611801 2570 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 14:24:10.425252 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:10.425219 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:24:10.425430 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:10.425219 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:24:10.425430 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:10.425337 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-7b299" podUID="cb704828-9d72-448f-8256-1dda6f6273ea" Apr 24 14:24:10.425430 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:10.425392 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c" Apr 24 14:24:10.425430 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:10.425219 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:24:10.425630 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:10.425466 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-wz4gq" podUID="0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b" Apr 24 14:24:10.615309 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:10.615276 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvbnt" event={"ID":"0d6bd978-a62b-4e69-9786-a9b7774d09db","Type":"ContainerStarted","Data":"80749186b77beba39834deccdded3571df3517df9af2b3dd95f1ca9b88345a6f"} Apr 24 14:24:11.618612 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:11.618577 2570 generic.go:358] "Generic (PLEG): container finished" podID="0d6bd978-a62b-4e69-9786-a9b7774d09db" containerID="80749186b77beba39834deccdded3571df3517df9af2b3dd95f1ca9b88345a6f" exitCode=0 Apr 24 14:24:11.619024 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:11.618645 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvbnt" event={"ID":"0d6bd978-a62b-4e69-9786-a9b7774d09db","Type":"ContainerDied","Data":"80749186b77beba39834deccdded3571df3517df9af2b3dd95f1ca9b88345a6f"} Apr 24 14:24:12.425641 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:12.425608 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:24:12.425840 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:12.425717 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:24:12.425840 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:12.425769 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:24:12.425840 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:12.425769 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-7b299" podUID="cb704828-9d72-448f-8256-1dda6f6273ea" Apr 24 14:24:12.426020 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:12.425855 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c" Apr 24 14:24:12.426020 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:12.425962 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-wz4gq" podUID="0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b" Apr 24 14:24:12.861733 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:12.861671 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:24:12.862223 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:12.861981 2570 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 14:24:12.882231 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:12.882194 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dz989" Apr 24 14:24:13.752125 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.752088 2570 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-34.ec2.internal" event="NodeReady" Apr 24 14:24:13.752304 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.752233 2570 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 24 14:24:13.785184 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.785155 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-54748d8fcc-2q79c"] Apr 24 14:24:13.813528 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.813500 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-fcd55b66b-r86xn"] Apr 24 14:24:13.813686 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.813644 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-54748d8fcc-2q79c" Apr 24 14:24:13.816402 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.816375 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"open-cluster-management-image-pull-credentials\"" Apr 24 14:24:13.817361 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.816829 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-hub-kubeconfig\"" Apr 24 14:24:13.817361 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.817065 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"kube-root-ca.crt\"" Apr 24 14:24:13.817361 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.817236 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"openshift-service-ca.crt\"" Apr 24 14:24:13.817361 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.817279 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-dockercfg-6kc7h\"" Apr 24 14:24:13.832094 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.832064 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c"] Apr 24 14:24:13.832245 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.832227 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:13.834806 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.834759 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Apr 24 14:24:13.834940 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.834907 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-xtmb2\"" Apr 24 14:24:13.835016 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.834940 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\"" Apr 24 14:24:13.835016 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.834946 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Apr 24 14:24:13.840843 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.840823 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Apr 24 14:24:13.865496 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.865471 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4"] Apr 24 14:24:13.865915 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.865601 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:13.868654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.868562 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-ca\"" Apr 24 14:24:13.868654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.868579 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-hub-kubeconfig\"" Apr 24 14:24:13.868878 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.868853 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-open-cluster-management.io-proxy-agent-signer-client-cert\"" Apr 24 14:24:13.868985 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.868902 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-service-proxy-server-certificates\"" Apr 24 14:24:13.904053 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.904028 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-54748d8fcc-2q79c"] Apr 24 14:24:13.904214 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.904059 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4"] Apr 24 14:24:13.904214 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.904076 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9wh5p"] Apr 24 14:24:13.904328 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.904211 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" Apr 24 14:24:13.906841 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.906819 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"work-manager-hub-kubeconfig\"" Apr 24 14:24:13.922870 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.922848 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c"] Apr 24 14:24:13.922993 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.922876 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-fcd55b66b-r86xn"] Apr 24 14:24:13.922993 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.922888 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9wh5p"] Apr 24 14:24:13.922993 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.922914 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-t8gtr"] Apr 24 14:24:13.922993 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.922944 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:24:13.925838 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.925606 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-84zkd\"" Apr 24 14:24:13.925838 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.925612 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 24 14:24:13.925838 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.925697 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 24 14:24:13.925838 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.925744 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 24 14:24:13.940544 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.940521 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-t8gtr"] Apr 24 14:24:13.940654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.940642 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:13.943284 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.943185 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-m8ml5\"" Apr 24 14:24:13.943284 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.943190 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 24 14:24:13.943448 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.943340 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 24 14:24:13.974860 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.974838 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/d644d2fe-b7f5-48f0-9538-6e02bc1e7203-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-54748d8fcc-2q79c\" (UID: \"d644d2fe-b7f5-48f0-9538-6e02bc1e7203\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-54748d8fcc-2q79c" Apr 24 14:24:13.974957 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.974868 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/f618f048-77b0-41f7-a1c7-f3a7816c9456-hub\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:13.974957 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.974887 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/f618f048-77b0-41f7-a1c7-f3a7816c9456-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:13.975032 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.974959 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/0123d5b7-b24f-4266-a1e2-30653ee3b093-ca-trust-extracted\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:13.975032 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.974996 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0123d5b7-b24f-4266-a1e2-30653ee3b093-trusted-ca\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:13.975032 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.975023 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crt98\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-kube-api-access-crt98\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:13.975237 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.975046 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/f618f048-77b0-41f7-a1c7-f3a7816c9456-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:13.975237 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.975070 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jbv5\" (UniqueName: \"kubernetes.io/projected/f618f048-77b0-41f7-a1c7-f3a7816c9456-kube-api-access-6jbv5\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:13.975237 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.975209 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/0123d5b7-b24f-4266-a1e2-30653ee3b093-image-registry-private-configuration\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:13.975328 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.975263 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:13.975328 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.975294 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-certificates\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:13.975328 
ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.975322 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cdce2a9f-fb35-4e60-b7d0-3616323b4f4a-tmp\") pod \"klusterlet-addon-workmgr-7495cb5474-jpfv4\" (UID: \"cdce2a9f-fb35-4e60-b7d0-3616323b4f4a\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" Apr 24 14:24:13.975444 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.975345 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/cdce2a9f-fb35-4e60-b7d0-3616323b4f4a-klusterlet-config\") pod \"klusterlet-addon-workmgr-7495cb5474-jpfv4\" (UID: \"cdce2a9f-fb35-4e60-b7d0-3616323b4f4a\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" Apr 24 14:24:13.975444 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.975375 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-bound-sa-token\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:13.975444 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.975403 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2wl6\" (UniqueName: \"kubernetes.io/projected/d644d2fe-b7f5-48f0-9538-6e02bc1e7203-kube-api-access-z2wl6\") pod \"managed-serviceaccount-addon-agent-54748d8fcc-2q79c\" (UID: \"d644d2fe-b7f5-48f0-9538-6e02bc1e7203\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-54748d8fcc-2q79c" Apr 24 14:24:13.975444 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.975429 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/f618f048-77b0-41f7-a1c7-f3a7816c9456-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:13.975633 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.975452 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/f618f048-77b0-41f7-a1c7-f3a7816c9456-ca\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:13.975633 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.975480 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0123d5b7-b24f-4266-a1e2-30653ee3b093-installation-pull-secrets\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:13.975633 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:13.975536 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8hk9\" (UniqueName: 
\"kubernetes.io/projected/cdce2a9f-fb35-4e60-b7d0-3616323b4f4a-kube-api-access-c8hk9\") pod \"klusterlet-addon-workmgr-7495cb5474-jpfv4\" (UID: \"cdce2a9f-fb35-4e60-b7d0-3616323b4f4a\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" Apr 24 14:24:14.076323 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076236 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/d644d2fe-b7f5-48f0-9538-6e02bc1e7203-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-54748d8fcc-2q79c\" (UID: \"d644d2fe-b7f5-48f0-9538-6e02bc1e7203\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-54748d8fcc-2q79c" Apr 24 14:24:14.076323 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076289 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/f618f048-77b0-41f7-a1c7-f3a7816c9456-hub\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:14.076323 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076317 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/f618f048-77b0-41f7-a1c7-f3a7816c9456-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:14.076582 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076349 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0123d5b7-b24f-4266-a1e2-30653ee3b093-ca-trust-extracted\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:14.076582 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076374 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0123d5b7-b24f-4266-a1e2-30653ee3b093-trusted-ca\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:14.076582 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076396 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-crt98\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-kube-api-access-crt98\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:14.076582 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076427 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/f618f048-77b0-41f7-a1c7-f3a7816c9456-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:14.076582 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076451 2570 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-6jbv5\" (UniqueName: \"kubernetes.io/projected/f618f048-77b0-41f7-a1c7-f3a7816c9456-kube-api-access-6jbv5\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:14.076582 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076481 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:14.076582 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076525 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/0123d5b7-b24f-4266-a1e2-30653ee3b093-image-registry-private-configuration\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:14.076582 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076557 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:14.076582 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076581 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-certificates\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:14.076950 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076604 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cdce2a9f-fb35-4e60-b7d0-3616323b4f4a-tmp\") pod \"klusterlet-addon-workmgr-7495cb5474-jpfv4\" (UID: \"cdce2a9f-fb35-4e60-b7d0-3616323b4f4a\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" Apr 24 14:24:14.076950 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076639 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/cdce2a9f-fb35-4e60-b7d0-3616323b4f4a-klusterlet-config\") pod \"klusterlet-addon-workmgr-7495cb5474-jpfv4\" (UID: \"cdce2a9f-fb35-4e60-b7d0-3616323b4f4a\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" Apr 24 14:24:14.076950 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076669 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-bound-sa-token\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:14.076950 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076685 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert\") pod \"ingress-canary-9wh5p\" (UID: \"fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65\") " pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:24:14.076950 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076699 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-tmp-dir\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:14.076950 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076725 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z2wl6\" (UniqueName: \"kubernetes.io/projected/d644d2fe-b7f5-48f0-9538-6e02bc1e7203-kube-api-access-z2wl6\") pod \"managed-serviceaccount-addon-agent-54748d8fcc-2q79c\" (UID: \"d644d2fe-b7f5-48f0-9538-6e02bc1e7203\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-54748d8fcc-2q79c" Apr 24 14:24:14.076950 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076755 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/f618f048-77b0-41f7-a1c7-f3a7816c9456-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:14.076950 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076770 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h9ws\" (UniqueName: \"kubernetes.io/projected/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-kube-api-access-6h9ws\") pod \"ingress-canary-9wh5p\" (UID: \"fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65\") " pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:24:14.076950 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076793 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/f618f048-77b0-41f7-a1c7-f3a7816c9456-ca\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:14.076950 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076824 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0123d5b7-b24f-4266-a1e2-30653ee3b093-installation-pull-secrets\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:14.077399 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.076861 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-config-volume\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:14.077399 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.077046 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmslk\" (UniqueName: 
\"kubernetes.io/projected/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-kube-api-access-hmslk\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:14.077399 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:14.077054 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 24 14:24:14.077399 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:14.077072 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-fcd55b66b-r86xn: secret "image-registry-tls" not found Apr 24 14:24:14.077399 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.077079 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c8hk9\" (UniqueName: \"kubernetes.io/projected/cdce2a9f-fb35-4e60-b7d0-3616323b4f4a-kube-api-access-c8hk9\") pod \"klusterlet-addon-workmgr-7495cb5474-jpfv4\" (UID: \"cdce2a9f-fb35-4e60-b7d0-3616323b4f4a\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" Apr 24 14:24:14.077399 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:14.077149 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls podName:0123d5b7-b24f-4266-a1e2-30653ee3b093 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:14.577127548 +0000 UTC m=+32.702531356 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls") pod "image-registry-fcd55b66b-r86xn" (UID: "0123d5b7-b24f-4266-a1e2-30653ee3b093") : secret "image-registry-tls" not found Apr 24 14:24:14.077399 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.077340 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cdce2a9f-fb35-4e60-b7d0-3616323b4f4a-tmp\") pod \"klusterlet-addon-workmgr-7495cb5474-jpfv4\" (UID: \"cdce2a9f-fb35-4e60-b7d0-3616323b4f4a\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" Apr 24 14:24:14.078126 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.077946 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-certificates\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:14.078126 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.078119 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0123d5b7-b24f-4266-a1e2-30653ee3b093-ca-trust-extracted\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:14.078830 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.078753 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/f618f048-77b0-41f7-a1c7-f3a7816c9456-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 
14:24:14.083785 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.083354 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0123d5b7-b24f-4266-a1e2-30653ee3b093-trusted-ca\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:14.085598 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.084242 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/d644d2fe-b7f5-48f0-9538-6e02bc1e7203-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-54748d8fcc-2q79c\" (UID: \"d644d2fe-b7f5-48f0-9538-6e02bc1e7203\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-54748d8fcc-2q79c" Apr 24 14:24:14.085598 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.084602 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca\" (UniqueName: \"kubernetes.io/secret/f618f048-77b0-41f7-a1c7-f3a7816c9456-ca\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:14.085598 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.084628 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/0123d5b7-b24f-4266-a1e2-30653ee3b093-image-registry-private-configuration\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:14.085598 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.084979 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0123d5b7-b24f-4266-a1e2-30653ee3b093-installation-pull-secrets\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:14.085598 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.085147 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/cdce2a9f-fb35-4e60-b7d0-3616323b4f4a-klusterlet-config\") pod \"klusterlet-addon-workmgr-7495cb5474-jpfv4\" (UID: \"cdce2a9f-fb35-4e60-b7d0-3616323b4f4a\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" Apr 24 14:24:14.085598 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.085585 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/f618f048-77b0-41f7-a1c7-f3a7816c9456-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:14.086729 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.086704 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/f618f048-77b0-41f7-a1c7-f3a7816c9456-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " 
pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:14.087411 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.087388 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub\" (UniqueName: \"kubernetes.io/secret/f618f048-77b0-41f7-a1c7-f3a7816c9456-hub\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:14.088897 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.087786 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-bound-sa-token\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:14.089463 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.089439 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8hk9\" (UniqueName: \"kubernetes.io/projected/cdce2a9f-fb35-4e60-b7d0-3616323b4f4a-kube-api-access-c8hk9\") pod \"klusterlet-addon-workmgr-7495cb5474-jpfv4\" (UID: \"cdce2a9f-fb35-4e60-b7d0-3616323b4f4a\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" Apr 24 14:24:14.089724 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.089701 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2wl6\" (UniqueName: \"kubernetes.io/projected/d644d2fe-b7f5-48f0-9538-6e02bc1e7203-kube-api-access-z2wl6\") pod \"managed-serviceaccount-addon-agent-54748d8fcc-2q79c\" (UID: \"d644d2fe-b7f5-48f0-9538-6e02bc1e7203\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-54748d8fcc-2q79c" Apr 24 14:24:14.091166 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.091142 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jbv5\" (UniqueName: \"kubernetes.io/projected/f618f048-77b0-41f7-a1c7-f3a7816c9456-kube-api-access-6jbv5\") pod \"cluster-proxy-proxy-agent-c8d9cff7c-7mz9c\" (UID: \"f618f048-77b0-41f7-a1c7-f3a7816c9456\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:14.091473 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.091458 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-crt98\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-kube-api-access-crt98\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:14.134782 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.134763 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-54748d8fcc-2q79c" Apr 24 14:24:14.175875 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.175843 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:24:14.178814 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.177774 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert\") pod \"ingress-canary-9wh5p\" (UID: \"fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65\") " pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:24:14.178814 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.177807 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-tmp-dir\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:14.178814 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:14.177935 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 24 14:24:14.178814 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:14.178004 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert podName:fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:14.677982053 +0000 UTC m=+32.803385857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert") pod "ingress-canary-9wh5p" (UID: "fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65") : secret "canary-serving-cert" not found Apr 24 14:24:14.178814 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.178310 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6h9ws\" (UniqueName: \"kubernetes.io/projected/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-kube-api-access-6h9ws\") pod \"ingress-canary-9wh5p\" (UID: \"fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65\") " pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:24:14.178814 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.178356 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-config-volume\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:14.178814 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.178381 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hmslk\" (UniqueName: \"kubernetes.io/projected/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-kube-api-access-hmslk\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:14.178814 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.178471 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:14.178814 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:14.178587 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 24 14:24:14.178814 ip-10-0-129-34 kubenswrapper[2570]: 
E0424 14:24:14.178631 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls podName:c8938f71-608b-4cd4-ae9c-3fee7fdcb899 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:14.678616846 +0000 UTC m=+32.804020640 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls") pod "dns-default-t8gtr" (UID: "c8938f71-608b-4cd4-ae9c-3fee7fdcb899") : secret "dns-default-metrics-tls" not found Apr 24 14:24:14.179476 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.178955 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-config-volume\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:14.179797 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.179776 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-tmp-dir\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:14.187463 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.187398 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmslk\" (UniqueName: \"kubernetes.io/projected/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-kube-api-access-hmslk\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:14.187906 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.187889 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h9ws\" (UniqueName: \"kubernetes.io/projected/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-kube-api-access-6h9ws\") pod \"ingress-canary-9wh5p\" (UID: \"fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65\") " pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:24:14.225466 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.225437 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" Apr 24 14:24:14.332347 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.332300 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c"] Apr 24 14:24:14.332429 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.332352 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-54748d8fcc-2q79c"] Apr 24 14:24:14.334874 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:24:14.334840 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd644d2fe_b7f5_48f0_9538_6e02bc1e7203.slice/crio-cf594395ddf60025861ae43d3fba12681abcdaf9263239b294710297674f2f66 WatchSource:0}: Error finding container cf594395ddf60025861ae43d3fba12681abcdaf9263239b294710297674f2f66: Status 404 returned error can't find the container with id cf594395ddf60025861ae43d3fba12681abcdaf9263239b294710297674f2f66 Apr 24 14:24:14.337348 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:24:14.337241 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf618f048_77b0_41f7_a1c7_f3a7816c9456.slice/crio-d2d1e494e061e4f8161645721bc0f69cca28dae43407c85abd6d799e0928f197 WatchSource:0}: Error finding container d2d1e494e061e4f8161645721bc0f69cca28dae43407c85abd6d799e0928f197: Status 404 returned error can't find the container with id d2d1e494e061e4f8161645721bc0f69cca28dae43407c85abd6d799e0928f197 Apr 24 14:24:14.380020 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.379983 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4"] Apr 24 14:24:14.384518 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:24:14.384489 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdce2a9f_fb35_4e60_b7d0_3616323b4f4a.slice/crio-a38f4c0b22b8fedbffb5140d9ddd6b1db4c6bee6a3460d87e51cc3ff9fb12758 WatchSource:0}: Error finding container a38f4c0b22b8fedbffb5140d9ddd6b1db4c6bee6a3460d87e51cc3ff9fb12758: Status 404 returned error can't find the container with id a38f4c0b22b8fedbffb5140d9ddd6b1db4c6bee6a3460d87e51cc3ff9fb12758 Apr 24 14:24:14.424786 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.424760 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:24:14.424929 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.424760 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:24:14.424985 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.424760 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:24:14.428059 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.427809 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 24 14:24:14.428059 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.427838 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 24 14:24:14.428248 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.428079 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 24 14:24:14.428248 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.428192 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-pdcbb\"" Apr 24 14:24:14.428248 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.428215 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-68j6s\"" Apr 24 14:24:14.428377 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.428321 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 24 14:24:14.582008 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.581918 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:14.582182 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:14.582049 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 24 14:24:14.582182 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:14.582065 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-fcd55b66b-r86xn: secret "image-registry-tls" not found Apr 24 14:24:14.582182 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:14.582135 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls podName:0123d5b7-b24f-4266-a1e2-30653ee3b093 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:15.582120053 +0000 UTC m=+33.707523846 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls") pod "image-registry-fcd55b66b-r86xn" (UID: "0123d5b7-b24f-4266-a1e2-30653ee3b093") : secret "image-registry-tls" not found Apr 24 14:24:14.625958 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.625903 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" event={"ID":"cdce2a9f-fb35-4e60-b7d0-3616323b4f4a","Type":"ContainerStarted","Data":"a38f4c0b22b8fedbffb5140d9ddd6b1db4c6bee6a3460d87e51cc3ff9fb12758"} Apr 24 14:24:14.627038 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.627008 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-54748d8fcc-2q79c" event={"ID":"d644d2fe-b7f5-48f0-9538-6e02bc1e7203","Type":"ContainerStarted","Data":"cf594395ddf60025861ae43d3fba12681abcdaf9263239b294710297674f2f66"} Apr 24 14:24:14.628047 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.628020 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" event={"ID":"f618f048-77b0-41f7-a1c7-f3a7816c9456","Type":"ContainerStarted","Data":"d2d1e494e061e4f8161645721bc0f69cca28dae43407c85abd6d799e0928f197"} Apr 24 14:24:14.682542 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.682492 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:14.682727 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:14.682570 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert\") pod \"ingress-canary-9wh5p\" (UID: \"fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65\") " pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:24:14.682727 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:14.682718 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 24 14:24:14.682848 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:14.682718 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 24 14:24:14.682848 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:14.682798 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert podName:fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:15.682777387 +0000 UTC m=+33.808181185 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert") pod "ingress-canary-9wh5p" (UID: "fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65") : secret "canary-serving-cert" not found Apr 24 14:24:14.682848 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:14.682822 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls podName:c8938f71-608b-4cd4-ae9c-3fee7fdcb899 nodeName:}" failed. 
No retries permitted until 2026-04-24 14:24:15.682805845 +0000 UTC m=+33.808209642 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls") pod "dns-default-t8gtr" (UID: "c8938f71-608b-4cd4-ae9c-3fee7fdcb899") : secret "dns-default-metrics-tls" not found Apr 24 14:24:15.085667 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:15.085628 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs\") pod \"network-metrics-daemon-ld8rd\" (UID: \"62277dce-4b78-4158-9951-1292c0fa443c\") " pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:24:15.086400 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:15.085788 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 24 14:24:15.086400 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:15.085871 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs podName:62277dce-4b78-4158-9951-1292c0fa443c nodeName:}" failed. No retries permitted until 2026-04-24 14:24:47.085850048 +0000 UTC m=+65.211253843 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs") pod "network-metrics-daemon-ld8rd" (UID: "62277dce-4b78-4158-9951-1292c0fa443c") : secret "metrics-daemon-secret" not found Apr 24 14:24:15.187333 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:15.186959 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-68rcm\" (UniqueName: \"kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm\") pod \"network-check-target-7b299\" (UID: \"cb704828-9d72-448f-8256-1dda6f6273ea\") " pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:24:15.209896 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:15.209864 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-68rcm\" (UniqueName: \"kubernetes.io/projected/cb704828-9d72-448f-8256-1dda6f6273ea-kube-api-access-68rcm\") pod \"network-check-target-7b299\" (UID: \"cb704828-9d72-448f-8256-1dda6f6273ea\") " pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:24:15.351592 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:15.351478 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:24:15.590771 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:15.590736 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:15.591020 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:15.590931 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 24 14:24:15.591020 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:15.590950 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-fcd55b66b-r86xn: secret "image-registry-tls" not found Apr 24 14:24:15.591020 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:15.591009 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls podName:0123d5b7-b24f-4266-a1e2-30653ee3b093 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:17.590990659 +0000 UTC m=+35.716394467 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls") pod "image-registry-fcd55b66b-r86xn" (UID: "0123d5b7-b24f-4266-a1e2-30653ee3b093") : secret "image-registry-tls" not found Apr 24 14:24:15.692599 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:15.691920 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert\") pod \"ingress-canary-9wh5p\" (UID: \"fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65\") " pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:24:15.692599 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:15.692028 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:15.692599 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:15.692193 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 24 14:24:15.692599 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:15.692265 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls podName:c8938f71-608b-4cd4-ae9c-3fee7fdcb899 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:17.692244655 +0000 UTC m=+35.817648491 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls") pod "dns-default-t8gtr" (UID: "c8938f71-608b-4cd4-ae9c-3fee7fdcb899") : secret "dns-default-metrics-tls" not found Apr 24 14:24:15.692599 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:15.692267 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 24 14:24:15.692599 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:15.692329 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert podName:fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:17.692311287 +0000 UTC m=+35.817715091 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert") pod "ingress-canary-9wh5p" (UID: "fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65") : secret "canary-serving-cert" not found Apr 24 14:24:17.609854 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:17.609624 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:17.610368 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:17.609785 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 24 14:24:17.610368 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:17.609935 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-fcd55b66b-r86xn: secret "image-registry-tls" not found Apr 24 14:24:17.610368 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:17.609989 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls podName:0123d5b7-b24f-4266-a1e2-30653ee3b093 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:21.609974902 +0000 UTC m=+39.735378694 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls") pod "image-registry-fcd55b66b-r86xn" (UID: "0123d5b7-b24f-4266-a1e2-30653ee3b093") : secret "image-registry-tls" not found Apr 24 14:24:17.710435 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:17.710403 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:17.710607 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:17.710456 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert\") pod \"ingress-canary-9wh5p\" (UID: \"fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65\") " pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:24:17.710607 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:17.710565 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 24 14:24:17.710715 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:17.710609 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 24 14:24:17.710715 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:17.710627 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert podName:fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:21.710609102 +0000 UTC m=+39.836012912 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert") pod "ingress-canary-9wh5p" (UID: "fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65") : secret "canary-serving-cert" not found Apr 24 14:24:17.710715 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:17.710669 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls podName:c8938f71-608b-4cd4-ae9c-3fee7fdcb899 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:21.710650613 +0000 UTC m=+39.836054412 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls") pod "dns-default-t8gtr" (UID: "c8938f71-608b-4cd4-ae9c-3fee7fdcb899") : secret "dns-default-metrics-tls" not found Apr 24 14:24:21.644057 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:21.644021 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:21.644422 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:21.644213 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 24 14:24:21.644422 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:21.644234 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-fcd55b66b-r86xn: secret "image-registry-tls" not found Apr 24 14:24:21.644422 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:21.644298 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls podName:0123d5b7-b24f-4266-a1e2-30653ee3b093 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:29.644278365 +0000 UTC m=+47.769682161 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls") pod "image-registry-fcd55b66b-r86xn" (UID: "0123d5b7-b24f-4266-a1e2-30653ee3b093") : secret "image-registry-tls" not found Apr 24 14:24:21.745139 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:21.745089 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:21.745283 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:21.745171 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert\") pod \"ingress-canary-9wh5p\" (UID: \"fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65\") " pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:24:21.745283 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:21.745213 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 24 14:24:21.745283 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:21.745274 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls podName:c8938f71-608b-4cd4-ae9c-3fee7fdcb899 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:29.745259541 +0000 UTC m=+47.870663337 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls") pod "dns-default-t8gtr" (UID: "c8938f71-608b-4cd4-ae9c-3fee7fdcb899") : secret "dns-default-metrics-tls" not found Apr 24 14:24:21.745283 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:21.745276 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 24 14:24:21.745415 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:21.745312 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert podName:fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:29.745301611 +0000 UTC m=+47.870705405 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert") pod "ingress-canary-9wh5p" (UID: "fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65") : secret "canary-serving-cert" not found Apr 24 14:24:21.980287 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:21.980259 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-7b299"] Apr 24 14:24:21.983922 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:24:21.983897 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb704828_9d72_448f_8256_1dda6f6273ea.slice/crio-39814b7aebf730fb544f079704f91013e1d91248fe57bde95b0fadf60a516e14 WatchSource:0}: Error finding container 39814b7aebf730fb544f079704f91013e1d91248fe57bde95b0fadf60a516e14: Status 404 returned error can't find the container with id 39814b7aebf730fb544f079704f91013e1d91248fe57bde95b0fadf60a516e14 Apr 24 14:24:22.645511 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:22.645419 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" event={"ID":"f618f048-77b0-41f7-a1c7-f3a7816c9456","Type":"ContainerStarted","Data":"85bcef2c9187c443a04901153334d674ba45fd09ef1ed7612d7873535744ae2a"} Apr 24 14:24:22.649257 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:22.649225 2570 generic.go:358] "Generic (PLEG): container finished" podID="0d6bd978-a62b-4e69-9786-a9b7774d09db" containerID="2e1606f61ffaf810a26b1b29c424cca021e467d4d58a3af3de153e807738ac8e" exitCode=0 Apr 24 14:24:22.649395 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:22.649298 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvbnt" event={"ID":"0d6bd978-a62b-4e69-9786-a9b7774d09db","Type":"ContainerDied","Data":"2e1606f61ffaf810a26b1b29c424cca021e467d4d58a3af3de153e807738ac8e"} Apr 24 14:24:22.651371 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:22.651334 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" event={"ID":"cdce2a9f-fb35-4e60-b7d0-3616323b4f4a","Type":"ContainerStarted","Data":"46738df3b652c2f174e29917f569eb1603f13fd92fd8f18042df25a4812134d1"} Apr 24 14:24:22.651623 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:22.651600 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" Apr 24 14:24:22.653396 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:22.653260 2570 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-54748d8fcc-2q79c" event={"ID":"d644d2fe-b7f5-48f0-9538-6e02bc1e7203","Type":"ContainerStarted","Data":"f17bbbad9a343e66523b493efd55082bd5225d5ec92e493495c1b79dc51ad6f0"} Apr 24 14:24:22.653505 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:22.653431 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" Apr 24 14:24:22.654732 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:22.654695 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-7b299" event={"ID":"cb704828-9d72-448f-8256-1dda6f6273ea","Type":"ContainerStarted","Data":"39814b7aebf730fb544f079704f91013e1d91248fe57bde95b0fadf60a516e14"} Apr 24 14:24:22.686387 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:22.686337 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-54748d8fcc-2q79c" podStartSLOduration=16.15491055 podStartE2EDuration="23.686319602s" podCreationTimestamp="2026-04-24 14:23:59 +0000 UTC" firstStartedPulling="2026-04-24 14:24:14.337046903 +0000 UTC m=+32.462450707" lastFinishedPulling="2026-04-24 14:24:21.868455966 +0000 UTC m=+39.993859759" observedRunningTime="2026-04-24 14:24:22.685865412 +0000 UTC m=+40.811269224" watchObservedRunningTime="2026-04-24 14:24:22.686319602 +0000 UTC m=+40.811723420" Apr 24 14:24:23.660848 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:23.660808 2570 generic.go:358] "Generic (PLEG): container finished" podID="0d6bd978-a62b-4e69-9786-a9b7774d09db" containerID="2c68b153ec212943a70e4b67a60a70b010f351e14c52bad1b45a8b4aa24ebc3a" exitCode=0 Apr 24 14:24:23.661320 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:23.660876 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvbnt" event={"ID":"0d6bd978-a62b-4e69-9786-a9b7774d09db","Type":"ContainerDied","Data":"2c68b153ec212943a70e4b67a60a70b010f351e14c52bad1b45a8b4aa24ebc3a"} Apr 24 14:24:23.684742 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:23.684613 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" podStartSLOduration=17.202694874 podStartE2EDuration="24.684595153s" podCreationTimestamp="2026-04-24 14:23:59 +0000 UTC" firstStartedPulling="2026-04-24 14:24:14.386557484 +0000 UTC m=+32.511961292" lastFinishedPulling="2026-04-24 14:24:21.868457764 +0000 UTC m=+39.993861571" observedRunningTime="2026-04-24 14:24:22.704444808 +0000 UTC m=+40.829848624" watchObservedRunningTime="2026-04-24 14:24:23.684595153 +0000 UTC m=+41.809998972" Apr 24 14:24:24.369317 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:24.369228 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret\") pod \"global-pull-secret-syncer-wz4gq\" (UID: \"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b\") " pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:24:24.372978 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:24.372940 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b-original-pull-secret\") pod 
\"global-pull-secret-syncer-wz4gq\" (UID: \"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b\") " pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:24:24.649590 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:24.649511 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-wz4gq" Apr 24 14:24:25.600544 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:25.600507 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-wz4gq"] Apr 24 14:24:25.604998 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:24:25.604974 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b4d949f_8da9_4af5_9fe4_ee71f6d2d56b.slice/crio-4d27eb7de68676f9e1e4abd2ed177b246b9f3f6aeaba7d1f6128007aea26b396 WatchSource:0}: Error finding container 4d27eb7de68676f9e1e4abd2ed177b246b9f3f6aeaba7d1f6128007aea26b396: Status 404 returned error can't find the container with id 4d27eb7de68676f9e1e4abd2ed177b246b9f3f6aeaba7d1f6128007aea26b396 Apr 24 14:24:25.666277 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:25.666247 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-wz4gq" event={"ID":"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b","Type":"ContainerStarted","Data":"4d27eb7de68676f9e1e4abd2ed177b246b9f3f6aeaba7d1f6128007aea26b396"} Apr 24 14:24:25.667851 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:25.667816 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" event={"ID":"f618f048-77b0-41f7-a1c7-f3a7816c9456","Type":"ContainerStarted","Data":"d8da1b4335df3155b860f61b62603696299492751e2d7a5fee860e9950e44e7d"} Apr 24 14:24:25.670720 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:25.670697 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jvbnt" event={"ID":"0d6bd978-a62b-4e69-9786-a9b7774d09db","Type":"ContainerStarted","Data":"e0a00ac410d9339e84f7a24e7add09c15aec528597765436c396bd2161b56739"} Apr 24 14:24:25.671852 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:25.671832 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-7b299" event={"ID":"cb704828-9d72-448f-8256-1dda6f6273ea","Type":"ContainerStarted","Data":"d29606b42139847d28a69cf8738a5fc53a8427f61f09372cc934eac0e3d50d21"} Apr 24 14:24:25.671982 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:25.671950 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:24:25.696779 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:25.696724 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jvbnt" podStartSLOduration=5.52546577 podStartE2EDuration="43.6967118s" podCreationTimestamp="2026-04-24 14:23:42 +0000 UTC" firstStartedPulling="2026-04-24 14:23:43.696283477 +0000 UTC m=+1.821687271" lastFinishedPulling="2026-04-24 14:24:21.8675295 +0000 UTC m=+39.992933301" observedRunningTime="2026-04-24 14:24:25.695402025 +0000 UTC m=+43.820805840" watchObservedRunningTime="2026-04-24 14:24:25.6967118 +0000 UTC m=+43.822115617" Apr 24 14:24:25.710742 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:25.710702 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-network-diagnostics/network-check-target-7b299" podStartSLOduration=40.202555572 podStartE2EDuration="43.710681024s" podCreationTimestamp="2026-04-24 14:23:42 +0000 UTC" firstStartedPulling="2026-04-24 14:24:21.985686385 +0000 UTC m=+40.111090178" lastFinishedPulling="2026-04-24 14:24:25.493811835 +0000 UTC m=+43.619215630" observedRunningTime="2026-04-24 14:24:25.710046375 +0000 UTC m=+43.835450190" watchObservedRunningTime="2026-04-24 14:24:25.710681024 +0000 UTC m=+43.836084839" Apr 24 14:24:26.678751 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:26.678361 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" event={"ID":"f618f048-77b0-41f7-a1c7-f3a7816c9456","Type":"ContainerStarted","Data":"b070e439062c381e6830aa739e6377627798d983afa53bafd71bd6d05095886e"} Apr 24 14:24:26.696715 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:26.696658 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" podStartSLOduration=16.556111634 podStartE2EDuration="27.696644537s" podCreationTimestamp="2026-04-24 14:23:59 +0000 UTC" firstStartedPulling="2026-04-24 14:24:14.339634494 +0000 UTC m=+32.465038291" lastFinishedPulling="2026-04-24 14:24:25.480167384 +0000 UTC m=+43.605571194" observedRunningTime="2026-04-24 14:24:26.695960606 +0000 UTC m=+44.821364424" watchObservedRunningTime="2026-04-24 14:24:26.696644537 +0000 UTC m=+44.822048351" Apr 24 14:24:29.712415 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:29.712357 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:29.712983 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:29.712510 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 24 14:24:29.712983 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:29.712536 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-fcd55b66b-r86xn: secret "image-registry-tls" not found Apr 24 14:24:29.712983 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:29.712608 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls podName:0123d5b7-b24f-4266-a1e2-30653ee3b093 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:45.712587767 +0000 UTC m=+63.837991560 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls") pod "image-registry-fcd55b66b-r86xn" (UID: "0123d5b7-b24f-4266-a1e2-30653ee3b093") : secret "image-registry-tls" not found Apr 24 14:24:29.812931 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:29.812889 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:29.813133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:29.812952 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert\") pod \"ingress-canary-9wh5p\" (UID: \"fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65\") " pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:24:29.813133 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:29.813040 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 24 14:24:29.813231 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:29.813148 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls podName:c8938f71-608b-4cd4-ae9c-3fee7fdcb899 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:45.813123918 +0000 UTC m=+63.938527731 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls") pod "dns-default-t8gtr" (UID: "c8938f71-608b-4cd4-ae9c-3fee7fdcb899") : secret "dns-default-metrics-tls" not found Apr 24 14:24:29.813231 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:29.813051 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 24 14:24:29.813305 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:29.813239 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert podName:fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65 nodeName:}" failed. No retries permitted until 2026-04-24 14:24:45.813219204 +0000 UTC m=+63.938623016 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert") pod "ingress-canary-9wh5p" (UID: "fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65") : secret "canary-serving-cert" not found Apr 24 14:24:31.693940 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:31.693901 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-wz4gq" event={"ID":"0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b","Type":"ContainerStarted","Data":"fe22184f8815f58e4640a7293c87793dc9333a354aae0acd2ce7e7e2ed1ab780"} Apr 24 14:24:31.710888 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:31.710839 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-wz4gq" podStartSLOduration=34.616510205 podStartE2EDuration="39.71082626s" podCreationTimestamp="2026-04-24 14:23:52 +0000 UTC" firstStartedPulling="2026-04-24 14:24:25.607051335 +0000 UTC m=+43.732455128" lastFinishedPulling="2026-04-24 14:24:30.70136739 +0000 UTC m=+48.826771183" observedRunningTime="2026-04-24 14:24:31.710363528 +0000 UTC m=+49.835767342" watchObservedRunningTime="2026-04-24 14:24:31.71082626 +0000 UTC m=+49.836230075" Apr 24 14:24:45.719313 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:45.719255 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:24:45.719717 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:45.719402 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 24 14:24:45.719717 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:45.719423 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-fcd55b66b-r86xn: secret "image-registry-tls" not found Apr 24 14:24:45.719717 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:45.719484 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls podName:0123d5b7-b24f-4266-a1e2-30653ee3b093 nodeName:}" failed. No retries permitted until 2026-04-24 14:25:17.719469556 +0000 UTC m=+95.844873350 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls") pod "image-registry-fcd55b66b-r86xn" (UID: "0123d5b7-b24f-4266-a1e2-30653ee3b093") : secret "image-registry-tls" not found Apr 24 14:24:45.819961 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:45.819914 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:24:45.819961 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:45.819968 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert\") pod \"ingress-canary-9wh5p\" (UID: \"fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65\") " pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:24:45.820188 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:45.820053 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 24 14:24:45.820188 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:45.820060 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 24 14:24:45.820188 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:45.820133 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert podName:fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65 nodeName:}" failed. No retries permitted until 2026-04-24 14:25:17.820098247 +0000 UTC m=+95.945502040 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert") pod "ingress-canary-9wh5p" (UID: "fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65") : secret "canary-serving-cert" not found Apr 24 14:24:45.820188 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:45.820146 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls podName:c8938f71-608b-4cd4-ae9c-3fee7fdcb899 nodeName:}" failed. No retries permitted until 2026-04-24 14:25:17.820140024 +0000 UTC m=+95.945543817 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls") pod "dns-default-t8gtr" (UID: "c8938f71-608b-4cd4-ae9c-3fee7fdcb899") : secret "dns-default-metrics-tls" not found Apr 24 14:24:47.128930 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:47.128879 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs\") pod \"network-metrics-daemon-ld8rd\" (UID: \"62277dce-4b78-4158-9951-1292c0fa443c\") " pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:24:47.129346 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:47.129007 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 24 14:24:47.129346 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:24:47.129061 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs podName:62277dce-4b78-4158-9951-1292c0fa443c nodeName:}" failed. No retries permitted until 2026-04-24 14:25:51.129046396 +0000 UTC m=+129.254450189 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs") pod "network-metrics-daemon-ld8rd" (UID: "62277dce-4b78-4158-9951-1292c0fa443c") : secret "metrics-daemon-secret" not found Apr 24 14:24:56.681117 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:24:56.680993 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-7b299" Apr 24 14:25:17.751876 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:25:17.751838 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:25:17.752318 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:25:17.751975 2570 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 24 14:25:17.752318 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:25:17.751992 2570 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-fcd55b66b-r86xn: secret "image-registry-tls" not found Apr 24 14:25:17.752318 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:25:17.752056 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls podName:0123d5b7-b24f-4266-a1e2-30653ee3b093 nodeName:}" failed. No retries permitted until 2026-04-24 14:26:21.752040096 +0000 UTC m=+159.877443902 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls") pod "image-registry-fcd55b66b-r86xn" (UID: "0123d5b7-b24f-4266-a1e2-30653ee3b093") : secret "image-registry-tls" not found Apr 24 14:25:17.852284 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:25:17.852252 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:25:17.852435 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:25:17.852295 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert\") pod \"ingress-canary-9wh5p\" (UID: \"fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65\") " pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:25:17.852435 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:25:17.852387 2570 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 24 14:25:17.852435 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:25:17.852393 2570 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 24 14:25:17.852435 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:25:17.852434 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert podName:fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65 nodeName:}" failed. No retries permitted until 2026-04-24 14:26:21.852421497 +0000 UTC m=+159.977825289 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert") pod "ingress-canary-9wh5p" (UID: "fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65") : secret "canary-serving-cert" not found Apr 24 14:25:17.852568 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:25:17.852448 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls podName:c8938f71-608b-4cd4-ae9c-3fee7fdcb899 nodeName:}" failed. No retries permitted until 2026-04-24 14:26:21.852441846 +0000 UTC m=+159.977845639 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls") pod "dns-default-t8gtr" (UID: "c8938f71-608b-4cd4-ae9c-3fee7fdcb899") : secret "dns-default-metrics-tls" not found Apr 24 14:25:51.199212 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:25:51.199176 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs\") pod \"network-metrics-daemon-ld8rd\" (UID: \"62277dce-4b78-4158-9951-1292c0fa443c\") " pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:25:51.199677 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:25:51.199287 2570 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 24 14:25:51.199677 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:25:51.199351 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs podName:62277dce-4b78-4158-9951-1292c0fa443c nodeName:}" failed. No retries permitted until 2026-04-24 14:27:53.199337248 +0000 UTC m=+251.324741041 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs") pod "network-metrics-daemon-ld8rd" (UID: "62277dce-4b78-4158-9951-1292c0fa443c") : secret "metrics-daemon-secret" not found Apr 24 14:25:54.358276 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:25:54.358246 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-cmhmr_7966ddd8-be1a-45a0-8020-2cd96b2fd595/dns-node-resolver/0.log" Apr 24 14:25:55.358630 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:25:55.358603 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-k7xj4_2e8d53e8-f5d3-4863-b7e1-8141078a84b3/node-ca/0.log" Apr 24 14:26:16.842399 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:26:16.842358 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[registry-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" podUID="0123d5b7-b24f-4266-a1e2-30653ee3b093" Apr 24 14:26:16.933717 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:26:16.933675 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-9wh5p" podUID="fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65" Apr 24 14:26:16.946455 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:16.946432 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:26:16.946600 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:16.946474 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:26:16.950022 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:26:16.950000 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-dns/dns-default-t8gtr" podUID="c8938f71-608b-4cd4-ae9c-3fee7fdcb899" Apr 24 14:26:17.158633 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.158557 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-d5lpl"] Apr 24 14:26:17.160577 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.160561 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.163836 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.163806 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-jk68x\"" Apr 24 14:26:17.163987 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.163897 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 24 14:26:17.163987 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.163910 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 24 14:26:17.163987 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.163898 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 24 14:26:17.164266 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.164251 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 24 14:26:17.173073 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.173047 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-d5lpl"] Apr 24 14:26:17.189997 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.189972 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhmjf\" (UniqueName: \"kubernetes.io/projected/8fce4c6c-54ba-47e1-969d-6a3156568317-kube-api-access-rhmjf\") pod \"insights-runtime-extractor-d5lpl\" (UID: \"8fce4c6c-54ba-47e1-969d-6a3156568317\") " pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.190143 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.190014 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/8fce4c6c-54ba-47e1-969d-6a3156568317-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-d5lpl\" (UID: \"8fce4c6c-54ba-47e1-969d-6a3156568317\") " pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.190143 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.190035 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/8fce4c6c-54ba-47e1-969d-6a3156568317-data-volume\") pod \"insights-runtime-extractor-d5lpl\" (UID: \"8fce4c6c-54ba-47e1-969d-6a3156568317\") " pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.190143 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.190077 2570 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/8fce4c6c-54ba-47e1-969d-6a3156568317-crio-socket\") pod \"insights-runtime-extractor-d5lpl\" (UID: \"8fce4c6c-54ba-47e1-969d-6a3156568317\") " pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.190262 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.190170 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/8fce4c6c-54ba-47e1-969d-6a3156568317-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-d5lpl\" (UID: \"8fce4c6c-54ba-47e1-969d-6a3156568317\") " pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.291379 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.291343 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/8fce4c6c-54ba-47e1-969d-6a3156568317-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-d5lpl\" (UID: \"8fce4c6c-54ba-47e1-969d-6a3156568317\") " pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.291379 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.291383 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rhmjf\" (UniqueName: \"kubernetes.io/projected/8fce4c6c-54ba-47e1-969d-6a3156568317-kube-api-access-rhmjf\") pod \"insights-runtime-extractor-d5lpl\" (UID: \"8fce4c6c-54ba-47e1-969d-6a3156568317\") " pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.291586 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.291422 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/8fce4c6c-54ba-47e1-969d-6a3156568317-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-d5lpl\" (UID: \"8fce4c6c-54ba-47e1-969d-6a3156568317\") " pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.291586 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.291441 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/8fce4c6c-54ba-47e1-969d-6a3156568317-data-volume\") pod \"insights-runtime-extractor-d5lpl\" (UID: \"8fce4c6c-54ba-47e1-969d-6a3156568317\") " pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.291586 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.291468 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/8fce4c6c-54ba-47e1-969d-6a3156568317-crio-socket\") pod \"insights-runtime-extractor-d5lpl\" (UID: \"8fce4c6c-54ba-47e1-969d-6a3156568317\") " pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.291730 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.291608 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/8fce4c6c-54ba-47e1-969d-6a3156568317-crio-socket\") pod \"insights-runtime-extractor-d5lpl\" (UID: \"8fce4c6c-54ba-47e1-969d-6a3156568317\") " pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.291846 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.291824 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/8fce4c6c-54ba-47e1-969d-6a3156568317-data-volume\") pod \"insights-runtime-extractor-d5lpl\" (UID: \"8fce4c6c-54ba-47e1-969d-6a3156568317\") " pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.292035 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.292019 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/8fce4c6c-54ba-47e1-969d-6a3156568317-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-d5lpl\" (UID: \"8fce4c6c-54ba-47e1-969d-6a3156568317\") " pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.293628 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.293611 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/8fce4c6c-54ba-47e1-969d-6a3156568317-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-d5lpl\" (UID: \"8fce4c6c-54ba-47e1-969d-6a3156568317\") " pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.304407 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.304383 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhmjf\" (UniqueName: \"kubernetes.io/projected/8fce4c6c-54ba-47e1-969d-6a3156568317-kube-api-access-rhmjf\") pod \"insights-runtime-extractor-d5lpl\" (UID: \"8fce4c6c-54ba-47e1-969d-6a3156568317\") " pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.437798 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:26:17.437767 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-multus/network-metrics-daemon-ld8rd" podUID="62277dce-4b78-4158-9951-1292c0fa443c" Apr 24 14:26:17.468915 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.468897 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-d5lpl" Apr 24 14:26:17.583888 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.583856 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-d5lpl"] Apr 24 14:26:17.588208 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:26:17.588176 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fce4c6c_54ba_47e1_969d_6a3156568317.slice/crio-c5b53eb72f1410f2c0663d060b54312c00df875b6f6520ab6e6747109d9c19a4 WatchSource:0}: Error finding container c5b53eb72f1410f2c0663d060b54312c00df875b6f6520ab6e6747109d9c19a4: Status 404 returned error can't find the container with id c5b53eb72f1410f2c0663d060b54312c00df875b6f6520ab6e6747109d9c19a4 Apr 24 14:26:17.950998 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.950964 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-d5lpl" event={"ID":"8fce4c6c-54ba-47e1-969d-6a3156568317","Type":"ContainerStarted","Data":"a4a28ba34f604b685c5844a956f00cf0e1a2819b14335a03ed575967f858d8b2"} Apr 24 14:26:17.950998 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:17.951000 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-d5lpl" event={"ID":"8fce4c6c-54ba-47e1-969d-6a3156568317","Type":"ContainerStarted","Data":"c5b53eb72f1410f2c0663d060b54312c00df875b6f6520ab6e6747109d9c19a4"} Apr 24 14:26:18.955075 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:18.955035 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-d5lpl" event={"ID":"8fce4c6c-54ba-47e1-969d-6a3156568317","Type":"ContainerStarted","Data":"4534ab2b8b0bec6565bf90cbaa3f31a620b36168cfa368619a86d938f0edaeef"} Apr 24 14:26:19.959282 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:19.959244 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-d5lpl" event={"ID":"8fce4c6c-54ba-47e1-969d-6a3156568317","Type":"ContainerStarted","Data":"e1d2c204b539421eb69b26e9bda84287a53835495e0cb365a79aa020af773b39"} Apr 24 14:26:19.981380 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:19.981315 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-d5lpl" podStartSLOduration=0.860302086 podStartE2EDuration="2.98130315s" podCreationTimestamp="2026-04-24 14:26:17 +0000 UTC" firstStartedPulling="2026-04-24 14:26:17.637215205 +0000 UTC m=+155.762618997" lastFinishedPulling="2026-04-24 14:26:19.758216255 +0000 UTC m=+157.883620061" observedRunningTime="2026-04-24 14:26:19.979869712 +0000 UTC m=+158.105273530" watchObservedRunningTime="2026-04-24 14:26:19.98130315 +0000 UTC m=+158.106706963" Apr 24 14:26:21.829089 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:21.829052 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls\") pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:26:21.831397 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:21.831372 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls\") 
pod \"image-registry-fcd55b66b-r86xn\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") " pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:26:21.929637 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:21.929585 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:26:21.929637 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:21.929645 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert\") pod \"ingress-canary-9wh5p\" (UID: \"fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65\") " pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:26:21.932007 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:21.931986 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c8938f71-608b-4cd4-ae9c-3fee7fdcb899-metrics-tls\") pod \"dns-default-t8gtr\" (UID: \"c8938f71-608b-4cd4-ae9c-3fee7fdcb899\") " pod="openshift-dns/dns-default-t8gtr" Apr 24 14:26:21.932130 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:21.932119 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65-cert\") pod \"ingress-canary-9wh5p\" (UID: \"fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65\") " pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:26:22.051606 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.051576 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-84zkd\"" Apr 24 14:26:22.051606 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.051576 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-xtmb2\"" Apr 24 14:26:22.057910 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.057891 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9wh5p" Apr 24 14:26:22.057974 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.057910 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:26:22.182877 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.182840 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-fcd55b66b-r86xn"] Apr 24 14:26:22.185976 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:26:22.185948 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0123d5b7_b24f_4266_a1e2_30653ee3b093.slice/crio-0f86d7c82e8af997ab580d60e1ebdf583984293888d6ddb84ba325612b13b710 WatchSource:0}: Error finding container 0f86d7c82e8af997ab580d60e1ebdf583984293888d6ddb84ba325612b13b710: Status 404 returned error can't find the container with id 0f86d7c82e8af997ab580d60e1ebdf583984293888d6ddb84ba325612b13b710 Apr 24 14:26:22.193419 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.193393 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9wh5p"] Apr 24 14:26:22.196329 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:26:22.196303 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfecc5f1a_59b3_4f57_9f5b_3d6977ac5a65.slice/crio-1a61b43dd998929a402fd4342972464639940167c5e72da66303cb01174e18f8 WatchSource:0}: Error finding container 1a61b43dd998929a402fd4342972464639940167c5e72da66303cb01174e18f8: Status 404 returned error can't find the container with id 1a61b43dd998929a402fd4342972464639940167c5e72da66303cb01174e18f8 Apr 24 14:26:22.651878 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.651812 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" podUID="cdce2a9f-fb35-4e60-b7d0-3616323b4f4a" containerName="acm-agent" probeResult="failure" output="Get \"http://10.132.0.8:8000/readyz\": dial tcp 10.132.0.8:8000: connect: connection refused" Apr 24 14:26:22.967783 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.967743 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9wh5p" event={"ID":"fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65","Type":"ContainerStarted","Data":"1a61b43dd998929a402fd4342972464639940167c5e72da66303cb01174e18f8"} Apr 24 14:26:22.969069 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.969039 2570 generic.go:358] "Generic (PLEG): container finished" podID="cdce2a9f-fb35-4e60-b7d0-3616323b4f4a" containerID="46738df3b652c2f174e29917f569eb1603f13fd92fd8f18042df25a4812134d1" exitCode=1 Apr 24 14:26:22.969199 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.969131 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" event={"ID":"cdce2a9f-fb35-4e60-b7d0-3616323b4f4a","Type":"ContainerDied","Data":"46738df3b652c2f174e29917f569eb1603f13fd92fd8f18042df25a4812134d1"} Apr 24 14:26:22.969557 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.969528 2570 scope.go:117] "RemoveContainer" containerID="46738df3b652c2f174e29917f569eb1603f13fd92fd8f18042df25a4812134d1" Apr 24 14:26:22.970526 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.970499 2570 generic.go:358] "Generic (PLEG): container finished" podID="d644d2fe-b7f5-48f0-9538-6e02bc1e7203" containerID="f17bbbad9a343e66523b493efd55082bd5225d5ec92e493495c1b79dc51ad6f0" exitCode=255 Apr 24 14:26:22.970599 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.970571 2570 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-54748d8fcc-2q79c" event={"ID":"d644d2fe-b7f5-48f0-9538-6e02bc1e7203","Type":"ContainerDied","Data":"f17bbbad9a343e66523b493efd55082bd5225d5ec92e493495c1b79dc51ad6f0"} Apr 24 14:26:22.970989 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.970925 2570 scope.go:117] "RemoveContainer" containerID="f17bbbad9a343e66523b493efd55082bd5225d5ec92e493495c1b79dc51ad6f0" Apr 24 14:26:22.972460 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.972415 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" event={"ID":"0123d5b7-b24f-4266-a1e2-30653ee3b093","Type":"ContainerStarted","Data":"916e0bf1281e497d379aef57eeec6cbcc79fd384381662bd7c3cdf87a632bffb"} Apr 24 14:26:22.972460 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.972450 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" event={"ID":"0123d5b7-b24f-4266-a1e2-30653ee3b093","Type":"ContainerStarted","Data":"0f86d7c82e8af997ab580d60e1ebdf583984293888d6ddb84ba325612b13b710"} Apr 24 14:26:22.972610 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:22.972594 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:26:23.043556 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:23.043494 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" podStartSLOduration=161.043473949 podStartE2EDuration="2m41.043473949s" podCreationTimestamp="2026-04-24 14:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:26:23.042727945 +0000 UTC m=+161.168131961" watchObservedRunningTime="2026-04-24 14:26:23.043473949 +0000 UTC m=+161.168877764" Apr 24 14:26:23.976356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:23.976322 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" event={"ID":"cdce2a9f-fb35-4e60-b7d0-3616323b4f4a","Type":"ContainerStarted","Data":"dd54eb51fc12fa2fe66d394a1481ee036deaafe2f6b0ca3aa6620152103ccb14"} Apr 24 14:26:23.976712 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:23.976672 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" Apr 24 14:26:23.977393 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:23.977375 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-7495cb5474-jpfv4" Apr 24 14:26:23.978131 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:23.978090 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-54748d8fcc-2q79c" event={"ID":"d644d2fe-b7f5-48f0-9538-6e02bc1e7203","Type":"ContainerStarted","Data":"450de528292f3ef25ad92a61af854e3e0259494b0929275c16b0d27a4317ffff"} Apr 24 14:26:23.979469 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:23.979435 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9wh5p" 
event={"ID":"fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65","Type":"ContainerStarted","Data":"3b22cb1f262436079fd6d73c4941219d9513f633a114349c6a45cd98ad6048d3"} Apr 24 14:26:24.041073 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:24.040956 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9wh5p" podStartSLOduration=129.337212753 podStartE2EDuration="2m11.040938408s" podCreationTimestamp="2026-04-24 14:24:13 +0000 UTC" firstStartedPulling="2026-04-24 14:26:22.200341225 +0000 UTC m=+160.325745021" lastFinishedPulling="2026-04-24 14:26:23.904066884 +0000 UTC m=+162.029470676" observedRunningTime="2026-04-24 14:26:24.040424449 +0000 UTC m=+162.165828263" watchObservedRunningTime="2026-04-24 14:26:24.040938408 +0000 UTC m=+162.166342221" Apr 24 14:26:28.424459 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:28.424370 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-t8gtr" Apr 24 14:26:28.427768 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:28.427745 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-m8ml5\"" Apr 24 14:26:28.435070 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:28.435050 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-t8gtr" Apr 24 14:26:28.565977 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:28.565946 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-t8gtr"] Apr 24 14:26:28.569428 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:26:28.569400 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8938f71_608b_4cd4_ae9c_3fee7fdcb899.slice/crio-b7a7bf875c1dc6c4cfd2450d87f4f15b58b49d1aaac629cae94be15a08d624d1 WatchSource:0}: Error finding container b7a7bf875c1dc6c4cfd2450d87f4f15b58b49d1aaac629cae94be15a08d624d1: Status 404 returned error can't find the container with id b7a7bf875c1dc6c4cfd2450d87f4f15b58b49d1aaac629cae94be15a08d624d1 Apr 24 14:26:28.993033 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:28.992997 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-t8gtr" event={"ID":"c8938f71-608b-4cd4-ae9c-3fee7fdcb899","Type":"ContainerStarted","Data":"b7a7bf875c1dc6c4cfd2450d87f4f15b58b49d1aaac629cae94be15a08d624d1"} Apr 24 14:26:29.997934 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:29.997899 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-t8gtr" event={"ID":"c8938f71-608b-4cd4-ae9c-3fee7fdcb899","Type":"ContainerStarted","Data":"21c0ad9f4c93b76fe5da8fe9732b3192cc8357b0a014d757c01345a81b55221e"} Apr 24 14:26:30.424427 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.424384 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd" Apr 24 14:26:30.524350 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.524317 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-68hzk"] Apr 24 14:26:30.526393 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.526375 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.529166 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.529136 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-8xq7d\"" Apr 24 14:26:30.529342 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.529329 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 24 14:26:30.529450 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.529425 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 24 14:26:30.530384 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.530371 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 24 14:26:30.530463 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.530407 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 24 14:26:30.530463 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.530426 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 24 14:26:30.534241 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.534222 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 24 14:26:30.600733 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.600700 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7ddc\" (UniqueName: \"kubernetes.io/projected/22aed69f-edd2-431c-9fc1-a4244441cfaf-kube-api-access-x7ddc\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.600733 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.600738 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.600921 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.600762 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-accelerators-collector-config\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.600921 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.600779 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-textfile\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.600921 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.600860 2570 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/22aed69f-edd2-431c-9fc1-a4244441cfaf-sys\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.600921 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.600898 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-wtmp\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.601041 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.600921 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/22aed69f-edd2-431c-9fc1-a4244441cfaf-metrics-client-ca\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.601041 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.600951 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/22aed69f-edd2-431c-9fc1-a4244441cfaf-root\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.601041 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.600966 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-tls\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.701655 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.701575 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x7ddc\" (UniqueName: \"kubernetes.io/projected/22aed69f-edd2-431c-9fc1-a4244441cfaf-kube-api-access-x7ddc\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.701655 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.701620 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.701655 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.701649 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-accelerators-collector-config\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.701948 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.701673 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: 
\"kubernetes.io/empty-dir/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-textfile\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.701948 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.701700 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/22aed69f-edd2-431c-9fc1-a4244441cfaf-sys\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.701948 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.701727 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-wtmp\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.701948 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.701747 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/22aed69f-edd2-431c-9fc1-a4244441cfaf-metrics-client-ca\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.701948 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.701778 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/22aed69f-edd2-431c-9fc1-a4244441cfaf-root\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.701948 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.701802 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-tls\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.701948 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.701805 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/22aed69f-edd2-431c-9fc1-a4244441cfaf-sys\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.701948 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.701946 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-wtmp\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.702354 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.701980 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/22aed69f-edd2-431c-9fc1-a4244441cfaf-root\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.702354 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:26:30.701952 2570 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Apr 24 14:26:30.702354 
ip-10-0-129-34 kubenswrapper[2570]: E0424 14:26:30.702054 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-tls podName:22aed69f-edd2-431c-9fc1-a4244441cfaf nodeName:}" failed. No retries permitted until 2026-04-24 14:26:31.202033835 +0000 UTC m=+169.327437631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-tls") pod "node-exporter-68hzk" (UID: "22aed69f-edd2-431c-9fc1-a4244441cfaf") : secret "node-exporter-tls" not found Apr 24 14:26:30.702354 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.702051 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-textfile\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.702565 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.702483 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-accelerators-collector-config\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.702565 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.702509 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/22aed69f-edd2-431c-9fc1-a4244441cfaf-metrics-client-ca\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.704138 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.704120 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:30.710348 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:30.710330 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7ddc\" (UniqueName: \"kubernetes.io/projected/22aed69f-edd2-431c-9fc1-a4244441cfaf-kube-api-access-x7ddc\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:31.002702 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:31.002608 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-t8gtr" event={"ID":"c8938f71-608b-4cd4-ae9c-3fee7fdcb899","Type":"ContainerStarted","Data":"1f3b8889c2dc0cd182807a81f72827e938aac0f62de5f9ef14f67e6d8f3441c6"} Apr 24 14:26:31.003039 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:31.002728 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-t8gtr" Apr 24 14:26:31.022829 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:31.022791 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-t8gtr" podStartSLOduration=136.740791864 podStartE2EDuration="2m18.022778367s" 
podCreationTimestamp="2026-04-24 14:24:13 +0000 UTC" firstStartedPulling="2026-04-24 14:26:28.57133273 +0000 UTC m=+166.696736524" lastFinishedPulling="2026-04-24 14:26:29.853319218 +0000 UTC m=+167.978723027" observedRunningTime="2026-04-24 14:26:31.022691366 +0000 UTC m=+169.148095181" watchObservedRunningTime="2026-04-24 14:26:31.022778367 +0000 UTC m=+169.148182181" Apr 24 14:26:31.206287 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:31.206246 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-tls\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:31.208549 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:31.208520 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/22aed69f-edd2-431c-9fc1-a4244441cfaf-node-exporter-tls\") pod \"node-exporter-68hzk\" (UID: \"22aed69f-edd2-431c-9fc1-a4244441cfaf\") " pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:31.435552 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:31.435528 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-68hzk" Apr 24 14:26:31.443218 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:26:31.443192 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22aed69f_edd2_431c_9fc1_a4244441cfaf.slice/crio-723775f83ddddd11c2842b3a7d3d9189120b1c547e50ba584b9e4fc25bf66e83 WatchSource:0}: Error finding container 723775f83ddddd11c2842b3a7d3d9189120b1c547e50ba584b9e4fc25bf66e83: Status 404 returned error can't find the container with id 723775f83ddddd11c2842b3a7d3d9189120b1c547e50ba584b9e4fc25bf66e83 Apr 24 14:26:32.007544 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:32.007502 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-68hzk" event={"ID":"22aed69f-edd2-431c-9fc1-a4244441cfaf","Type":"ContainerStarted","Data":"723775f83ddddd11c2842b3a7d3d9189120b1c547e50ba584b9e4fc25bf66e83"} Apr 24 14:26:33.011445 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:33.011403 2570 generic.go:358] "Generic (PLEG): container finished" podID="22aed69f-edd2-431c-9fc1-a4244441cfaf" containerID="ece47c87f00db6b4db76393b8b36b66cfe865b8eee6724f744ad36bfffc71ada" exitCode=0 Apr 24 14:26:33.011823 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:33.011472 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-68hzk" event={"ID":"22aed69f-edd2-431c-9fc1-a4244441cfaf","Type":"ContainerDied","Data":"ece47c87f00db6b4db76393b8b36b66cfe865b8eee6724f744ad36bfffc71ada"} Apr 24 14:26:34.015397 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:34.015361 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-68hzk" event={"ID":"22aed69f-edd2-431c-9fc1-a4244441cfaf","Type":"ContainerStarted","Data":"0d2cdb2b6b8dbb7fccfd7fc4488aad137144ad9d26b7e5eeba6a9f7fdc8edca3"} Apr 24 14:26:34.015397 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:34.015398 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-68hzk" event={"ID":"22aed69f-edd2-431c-9fc1-a4244441cfaf","Type":"ContainerStarted","Data":"1754d8480d2f7dd139e806cb412396613948d8102cd64f2ad22625d6e2f495e2"} Apr 24 
14:26:34.034441 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:34.034393 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-68hzk" podStartSLOduration=3.434933029 podStartE2EDuration="4.034380286s" podCreationTimestamp="2026-04-24 14:26:30 +0000 UTC" firstStartedPulling="2026-04-24 14:26:31.444917335 +0000 UTC m=+169.570321127" lastFinishedPulling="2026-04-24 14:26:32.044364591 +0000 UTC m=+170.169768384" observedRunningTime="2026-04-24 14:26:34.033214579 +0000 UTC m=+172.158618397" watchObservedRunningTime="2026-04-24 14:26:34.034380286 +0000 UTC m=+172.159784098" Apr 24 14:26:41.009523 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:41.009492 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-t8gtr" Apr 24 14:26:42.062621 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:42.062578 2570 patch_prober.go:28] interesting pod/image-registry-fcd55b66b-r86xn container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 24 14:26:42.062989 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:42.062641 2570 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" podUID="0123d5b7-b24f-4266-a1e2-30653ee3b093" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 14:26:43.983378 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:43.983349 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" Apr 24 14:26:48.864502 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:48.864465 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-fcd55b66b-r86xn"] Apr 24 14:26:54.177380 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:26:54.177339 2570 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" podUID="f618f048-77b0-41f7-a1c7-f3a7816c9456" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 24 14:27:04.177682 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:04.177640 2570 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" podUID="f618f048-77b0-41f7-a1c7-f3a7816c9456" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 24 14:27:12.375523 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:12.375496 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-68hzk_22aed69f-edd2-431c-9fc1-a4244441cfaf/init-textfile/0.log" Apr 24 14:27:12.574418 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:12.574393 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-68hzk_22aed69f-edd2-431c-9fc1-a4244441cfaf/node-exporter/0.log" Apr 24 14:27:12.773590 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:12.773569 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-68hzk_22aed69f-edd2-431c-9fc1-a4244441cfaf/kube-rbac-proxy/0.log" Apr 24 14:27:13.885680 ip-10-0-129-34 kubenswrapper[2570]: I0424 
Apr 24 14:27:13.980647 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:13.980620 2570 patch_prober.go:28] interesting pod/image-registry-fcd55b66b-r86xn container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.132.0.9:5000/healthz\": dial tcp 10.132.0.9:5000: connect: connection refused" start-of-body=
Apr 24 14:27:13.980761 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:13.980681 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" podUID="0123d5b7-b24f-4266-a1e2-30653ee3b093" containerName="registry" probeResult="failure" output="Get \"https://10.132.0.9:5000/healthz\": dial tcp 10.132.0.9:5000: connect: connection refused"
Apr 24 14:27:14.120631 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.120607 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-fcd55b66b-r86xn"
Apr 24 14:27:14.121281 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.121258 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0123d5b7-b24f-4266-a1e2-30653ee3b093-installation-pull-secrets\") pod \"0123d5b7-b24f-4266-a1e2-30653ee3b093\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") "
Apr 24 14:27:14.121419 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.121292 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0123d5b7-b24f-4266-a1e2-30653ee3b093-trusted-ca\") pod \"0123d5b7-b24f-4266-a1e2-30653ee3b093\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") "
Apr 24 14:27:14.121419 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.121309 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-certificates\") pod \"0123d5b7-b24f-4266-a1e2-30653ee3b093\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") "
Apr 24 14:27:14.121419 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.121345 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0123d5b7-b24f-4266-a1e2-30653ee3b093-ca-trust-extracted\") pod \"0123d5b7-b24f-4266-a1e2-30653ee3b093\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") "
Apr 24 14:27:14.121419 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.121391 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/0123d5b7-b24f-4266-a1e2-30653ee3b093-image-registry-private-configuration\") pod \"0123d5b7-b24f-4266-a1e2-30653ee3b093\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") "
Apr 24 14:27:14.121639 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.121434 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crt98\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-kube-api-access-crt98\") pod \"0123d5b7-b24f-4266-a1e2-30653ee3b093\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") "
Apr 24 14:27:14.121639 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.121469 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls\") pod \"0123d5b7-b24f-4266-a1e2-30653ee3b093\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") "
Apr 24 14:27:14.121639 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.121497 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-bound-sa-token\") pod \"0123d5b7-b24f-4266-a1e2-30653ee3b093\" (UID: \"0123d5b7-b24f-4266-a1e2-30653ee3b093\") "
Apr 24 14:27:14.121786 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.121721 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0123d5b7-b24f-4266-a1e2-30653ee3b093-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0123d5b7-b24f-4266-a1e2-30653ee3b093" (UID: "0123d5b7-b24f-4266-a1e2-30653ee3b093"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 24 14:27:14.121786 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.121749 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "0123d5b7-b24f-4266-a1e2-30653ee3b093" (UID: "0123d5b7-b24f-4266-a1e2-30653ee3b093"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 24 14:27:14.122961 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.122929 2570 generic.go:358] "Generic (PLEG): container finished" podID="0123d5b7-b24f-4266-a1e2-30653ee3b093" containerID="916e0bf1281e497d379aef57eeec6cbcc79fd384381662bd7c3cdf87a632bffb" exitCode=0
Apr 24 14:27:14.123086 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.123018 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" event={"ID":"0123d5b7-b24f-4266-a1e2-30653ee3b093","Type":"ContainerDied","Data":"916e0bf1281e497d379aef57eeec6cbcc79fd384381662bd7c3cdf87a632bffb"}
Apr 24 14:27:14.123086 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.123046 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-fcd55b66b-r86xn" event={"ID":"0123d5b7-b24f-4266-a1e2-30653ee3b093","Type":"ContainerDied","Data":"0f86d7c82e8af997ab580d60e1ebdf583984293888d6ddb84ba325612b13b710"}
Apr 24 14:27:14.123086 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.123067 2570 scope.go:117] "RemoveContainer" containerID="916e0bf1281e497d379aef57eeec6cbcc79fd384381662bd7c3cdf87a632bffb"
Apr 24 14:27:14.123279 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.123223 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-fcd55b66b-r86xn"
Apr 24 14:27:14.124151 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.124073 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "0123d5b7-b24f-4266-a1e2-30653ee3b093" (UID: "0123d5b7-b24f-4266-a1e2-30653ee3b093"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 14:27:14.124316 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.124282 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-kube-api-access-crt98" (OuterVolumeSpecName: "kube-api-access-crt98") pod "0123d5b7-b24f-4266-a1e2-30653ee3b093" (UID: "0123d5b7-b24f-4266-a1e2-30653ee3b093"). InnerVolumeSpecName "kube-api-access-crt98". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 14:27:14.124418 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.124362 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "0123d5b7-b24f-4266-a1e2-30653ee3b093" (UID: "0123d5b7-b24f-4266-a1e2-30653ee3b093"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 14:27:14.124542 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.124455 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0123d5b7-b24f-4266-a1e2-30653ee3b093-image-registry-private-configuration" (OuterVolumeSpecName: "image-registry-private-configuration") pod "0123d5b7-b24f-4266-a1e2-30653ee3b093" (UID: "0123d5b7-b24f-4266-a1e2-30653ee3b093"). InnerVolumeSpecName "image-registry-private-configuration". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 14:27:14.124707 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.124688 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0123d5b7-b24f-4266-a1e2-30653ee3b093-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "0123d5b7-b24f-4266-a1e2-30653ee3b093" (UID: "0123d5b7-b24f-4266-a1e2-30653ee3b093"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 14:27:14.132740 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.132714 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0123d5b7-b24f-4266-a1e2-30653ee3b093-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "0123d5b7-b24f-4266-a1e2-30653ee3b093" (UID: "0123d5b7-b24f-4266-a1e2-30653ee3b093"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 24 14:27:14.138497 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.138481 2570 scope.go:117] "RemoveContainer" containerID="916e0bf1281e497d379aef57eeec6cbcc79fd384381662bd7c3cdf87a632bffb" Apr 24 14:27:14.138753 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:27:14.138728 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"916e0bf1281e497d379aef57eeec6cbcc79fd384381662bd7c3cdf87a632bffb\": container with ID starting with 916e0bf1281e497d379aef57eeec6cbcc79fd384381662bd7c3cdf87a632bffb not found: ID does not exist" containerID="916e0bf1281e497d379aef57eeec6cbcc79fd384381662bd7c3cdf87a632bffb" Apr 24 14:27:14.138808 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.138756 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"916e0bf1281e497d379aef57eeec6cbcc79fd384381662bd7c3cdf87a632bffb"} err="failed to get container status \"916e0bf1281e497d379aef57eeec6cbcc79fd384381662bd7c3cdf87a632bffb\": rpc error: code = NotFound desc = could not find container \"916e0bf1281e497d379aef57eeec6cbcc79fd384381662bd7c3cdf87a632bffb\": container with ID starting with 916e0bf1281e497d379aef57eeec6cbcc79fd384381662bd7c3cdf87a632bffb not found: ID does not exist" Apr 24 14:27:14.176832 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.176801 2570 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" podUID="f618f048-77b0-41f7-a1c7-f3a7816c9456" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 24 14:27:14.176941 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.176876 2570 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" Apr 24 14:27:14.177445 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.177427 2570 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="service-proxy" containerStatusID={"Type":"cri-o","ID":"b070e439062c381e6830aa739e6377627798d983afa53bafd71bd6d05095886e"} pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" containerMessage="Container service-proxy failed liveness probe, will be restarted" Apr 24 14:27:14.177499 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.177467 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" podUID="f618f048-77b0-41f7-a1c7-f3a7816c9456" containerName="service-proxy" containerID="cri-o://b070e439062c381e6830aa739e6377627798d983afa53bafd71bd6d05095886e" gracePeriod=30 Apr 24 14:27:14.222604 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.222574 2570 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0123d5b7-b24f-4266-a1e2-30653ee3b093-ca-trust-extracted\") on node \"ip-10-0-129-34.ec2.internal\" DevicePath \"\"" Apr 24 14:27:14.222604 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.222602 2570 reconciler_common.go:299] "Volume detached for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/0123d5b7-b24f-4266-a1e2-30653ee3b093-image-registry-private-configuration\") on node \"ip-10-0-129-34.ec2.internal\" DevicePath \"\"" Apr 24 14:27:14.222782 ip-10-0-129-34 kubenswrapper[2570]: 
I0424 14:27:14.222617 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-crt98\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-kube-api-access-crt98\") on node \"ip-10-0-129-34.ec2.internal\" DevicePath \"\"" Apr 24 14:27:14.222782 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.222633 2570 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-tls\") on node \"ip-10-0-129-34.ec2.internal\" DevicePath \"\"" Apr 24 14:27:14.222782 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.222649 2570 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0123d5b7-b24f-4266-a1e2-30653ee3b093-bound-sa-token\") on node \"ip-10-0-129-34.ec2.internal\" DevicePath \"\"" Apr 24 14:27:14.222782 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.222664 2570 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0123d5b7-b24f-4266-a1e2-30653ee3b093-installation-pull-secrets\") on node \"ip-10-0-129-34.ec2.internal\" DevicePath \"\"" Apr 24 14:27:14.222782 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.222677 2570 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0123d5b7-b24f-4266-a1e2-30653ee3b093-trusted-ca\") on node \"ip-10-0-129-34.ec2.internal\" DevicePath \"\"" Apr 24 14:27:14.222782 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.222692 2570 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0123d5b7-b24f-4266-a1e2-30653ee3b093-registry-certificates\") on node \"ip-10-0-129-34.ec2.internal\" DevicePath \"\"" Apr 24 14:27:14.445070 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.445042 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-fcd55b66b-r86xn"] Apr 24 14:27:14.448253 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:14.448234 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-fcd55b66b-r86xn"] Apr 24 14:27:15.126891 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:15.126858 2570 generic.go:358] "Generic (PLEG): container finished" podID="f618f048-77b0-41f7-a1c7-f3a7816c9456" containerID="b070e439062c381e6830aa739e6377627798d983afa53bafd71bd6d05095886e" exitCode=2 Apr 24 14:27:15.127310 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:15.126926 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" event={"ID":"f618f048-77b0-41f7-a1c7-f3a7816c9456","Type":"ContainerDied","Data":"b070e439062c381e6830aa739e6377627798d983afa53bafd71bd6d05095886e"} Apr 24 14:27:15.127310 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:15.126960 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-c8d9cff7c-7mz9c" event={"ID":"f618f048-77b0-41f7-a1c7-f3a7816c9456","Type":"ContainerStarted","Data":"7e8b0c9d9e43c2456ba3aa95388d87faefef8cfef606cb1ade577e7c24418329"} Apr 24 14:27:16.430411 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:16.430378 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0123d5b7-b24f-4266-a1e2-30653ee3b093" path="/var/lib/kubelet/pods/0123d5b7-b24f-4266-a1e2-30653ee3b093/volumes" Apr 24 14:27:19.974115 ip-10-0-129-34 
Apr 24 14:27:53.296024 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:53.295984 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs\") pod \"network-metrics-daemon-ld8rd\" (UID: \"62277dce-4b78-4158-9951-1292c0fa443c\") " pod="openshift-multus/network-metrics-daemon-ld8rd"
Apr 24 14:27:53.298270 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:53.298253 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62277dce-4b78-4158-9951-1292c0fa443c-metrics-certs\") pod \"network-metrics-daemon-ld8rd\" (UID: \"62277dce-4b78-4158-9951-1292c0fa443c\") " pod="openshift-multus/network-metrics-daemon-ld8rd"
Apr 24 14:27:53.527972 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:53.527938 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-68j6s\""
Apr 24 14:27:53.536223 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:53.536203 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ld8rd"
Apr 24 14:27:53.648992 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:53.648957 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ld8rd"]
Apr 24 14:27:53.651937 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:27:53.651898 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62277dce_4b78_4158_9951_1292c0fa443c.slice/crio-274c478c722a0b198d709a1091673ace9b80395be5fd320ad030b46b8bb8c516 WatchSource:0}: Error finding container 274c478c722a0b198d709a1091673ace9b80395be5fd320ad030b46b8bb8c516: Status 404 returned error can't find the container with id 274c478c722a0b198d709a1091673ace9b80395be5fd320ad030b46b8bb8c516
Apr 24 14:27:54.226960 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:54.226915 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ld8rd" event={"ID":"62277dce-4b78-4158-9951-1292c0fa443c","Type":"ContainerStarted","Data":"274c478c722a0b198d709a1091673ace9b80395be5fd320ad030b46b8bb8c516"}
Apr 24 14:27:55.231138 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:55.231088 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ld8rd" event={"ID":"62277dce-4b78-4158-9951-1292c0fa443c","Type":"ContainerStarted","Data":"7ced98c72944b9dddda24f29bea47d67b2e14c28becb18641883bcab91e29e43"}
Apr 24 14:27:55.231138 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:55.231137 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ld8rd" event={"ID":"62277dce-4b78-4158-9951-1292c0fa443c","Type":"ContainerStarted","Data":"eaa8e78a74beeb89a765b9c8174eb682ee4df876ad9f17a1fed342cec95c0ae7"}
Apr 24 14:27:55.251294 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:27:55.251243 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-ld8rd" podStartSLOduration=252.085509108 podStartE2EDuration="4m13.251230422s" podCreationTimestamp="2026-04-24 14:23:42 +0000 UTC" firstStartedPulling="2026-04-24 14:27:53.653789094 +0000 UTC m=+251.779192890" lastFinishedPulling="2026-04-24 14:27:54.819510399 +0000 UTC m=+252.944914204" observedRunningTime="2026-04-24 14:27:55.249587952 +0000 UTC m=+253.374991777" watchObservedRunningTime="2026-04-24 14:27:55.251230422 +0000 UTC m=+253.376634236"
Apr 24 14:28:42.340488 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:28:42.340457 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log"
Apr 24 14:28:42.340974 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:28:42.340732 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log"
Apr 24 14:28:42.344442 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:28:42.344427 2570 kubelet.go:1628] "Image garbage collection succeeded"
Apr 24 14:29:37.785944 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.785844 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q"]
Apr 24 14:29:37.786330 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.786155 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0123d5b7-b24f-4266-a1e2-30653ee3b093" containerName="registry"
Apr 24 14:29:37.786330 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.786167 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="0123d5b7-b24f-4266-a1e2-30653ee3b093" containerName="registry"
Apr 24 14:29:37.786330 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.786218 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="0123d5b7-b24f-4266-a1e2-30653ee3b093" containerName="registry"
Apr 24 14:29:37.788387 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.788367 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q"
Apr 24 14:29:37.791369 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.791349 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-metrics-apiserver-certs\""
Apr 24 14:29:37.791369 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.791365 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"kube-root-ca.crt\""
Apr 24 14:29:37.792596 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.792572 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-8cjc6\""
Apr 24 14:29:37.792692 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.792638 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"kedaorg-certs\""
Apr 24 14:29:37.792692 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.792641 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"keda-ocp-cabundle\""
Apr 24 14:29:37.792824 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.792742 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"openshift-service-ca.crt\""
Apr 24 14:29:37.799695 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.799675 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q"]
Apr 24 14:29:37.805060 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.805041 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/2091da71-b9f6-43f1-b580-a2b2d431275a-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-ntc2q\" (UID: \"2091da71-b9f6-43f1-b580-a2b2d431275a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q"
Apr 24 14:29:37.805160 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.805068 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-certificates\") pod \"keda-metrics-apiserver-7c9f485588-ntc2q\" (UID: \"2091da71-b9f6-43f1-b580-a2b2d431275a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q"
Apr 24 14:29:37.805160 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.805090 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz47b\" (UniqueName: \"kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-kube-api-access-dz47b\") pod \"keda-metrics-apiserver-7c9f485588-ntc2q\" (UID: \"2091da71-b9f6-43f1-b580-a2b2d431275a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q"
Apr 24 14:29:37.905916 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.905877 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/2091da71-b9f6-43f1-b580-a2b2d431275a-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-ntc2q\" (UID: \"2091da71-b9f6-43f1-b580-a2b2d431275a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q"
Apr 24 14:29:37.905916 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.905923 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-certificates\") pod \"keda-metrics-apiserver-7c9f485588-ntc2q\" (UID: \"2091da71-b9f6-43f1-b580-a2b2d431275a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q"
\"kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-certificates\") pod \"keda-metrics-apiserver-7c9f485588-ntc2q\" (UID: \"2091da71-b9f6-43f1-b580-a2b2d431275a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q" Apr 24 14:29:37.906178 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.905992 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dz47b\" (UniqueName: \"kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-kube-api-access-dz47b\") pod \"keda-metrics-apiserver-7c9f485588-ntc2q\" (UID: \"2091da71-b9f6-43f1-b580-a2b2d431275a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q" Apr 24 14:29:37.906178 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:29:37.906048 2570 secret.go:281] references non-existent secret key: tls.crt Apr 24 14:29:37.906178 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:29:37.906064 2570 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 24 14:29:37.906178 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:29:37.906086 2570 projected.go:264] Couldn't get secret openshift-keda/keda-metrics-apiserver-certs: secret "keda-metrics-apiserver-certs" not found Apr 24 14:29:37.906178 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:29:37.906129 2570 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q: [references non-existent secret key: tls.crt, secret "keda-metrics-apiserver-certs" not found] Apr 24 14:29:37.906335 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:29:37.906185 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-certificates podName:2091da71-b9f6-43f1-b580-a2b2d431275a nodeName:}" failed. No retries permitted until 2026-04-24 14:29:38.406165533 +0000 UTC m=+356.531569346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-certificates") pod "keda-metrics-apiserver-7c9f485588-ntc2q" (UID: "2091da71-b9f6-43f1-b580-a2b2d431275a") : [references non-existent secret key: tls.crt, secret "keda-metrics-apiserver-certs" not found] Apr 24 14:29:37.906478 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.906461 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/2091da71-b9f6-43f1-b580-a2b2d431275a-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-ntc2q\" (UID: \"2091da71-b9f6-43f1-b580-a2b2d431275a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q" Apr 24 14:29:37.915133 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:37.915070 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz47b\" (UniqueName: \"kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-kube-api-access-dz47b\") pod \"keda-metrics-apiserver-7c9f485588-ntc2q\" (UID: \"2091da71-b9f6-43f1-b580-a2b2d431275a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q" Apr 24 14:29:38.087720 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:38.087648 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-admission-cf49989db-j2k27"] Apr 24 14:29:38.089874 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:38.089859 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-j2k27" Apr 24 14:29:38.092519 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:38.092498 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-admission-webhooks-certs\"" Apr 24 14:29:38.098164 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:38.098145 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-j2k27"] Apr 24 14:29:38.107402 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:38.107383 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpt8h\" (UniqueName: \"kubernetes.io/projected/a15abe33-b4eb-49a9-a168-73c648ab3ece-kube-api-access-xpt8h\") pod \"keda-admission-cf49989db-j2k27\" (UID: \"a15abe33-b4eb-49a9-a168-73c648ab3ece\") " pod="openshift-keda/keda-admission-cf49989db-j2k27" Apr 24 14:29:38.107506 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:38.107443 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/a15abe33-b4eb-49a9-a168-73c648ab3ece-certificates\") pod \"keda-admission-cf49989db-j2k27\" (UID: \"a15abe33-b4eb-49a9-a168-73c648ab3ece\") " pod="openshift-keda/keda-admission-cf49989db-j2k27" Apr 24 14:29:38.208655 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:38.208626 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xpt8h\" (UniqueName: \"kubernetes.io/projected/a15abe33-b4eb-49a9-a168-73c648ab3ece-kube-api-access-xpt8h\") pod \"keda-admission-cf49989db-j2k27\" (UID: \"a15abe33-b4eb-49a9-a168-73c648ab3ece\") " pod="openshift-keda/keda-admission-cf49989db-j2k27" Apr 24 14:29:38.208832 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:38.208679 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/a15abe33-b4eb-49a9-a168-73c648ab3ece-certificates\") pod \"keda-admission-cf49989db-j2k27\" (UID: \"a15abe33-b4eb-49a9-a168-73c648ab3ece\") " pod="openshift-keda/keda-admission-cf49989db-j2k27" Apr 24 14:29:38.210879 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:38.210861 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/a15abe33-b4eb-49a9-a168-73c648ab3ece-certificates\") pod \"keda-admission-cf49989db-j2k27\" (UID: \"a15abe33-b4eb-49a9-a168-73c648ab3ece\") " pod="openshift-keda/keda-admission-cf49989db-j2k27" Apr 24 14:29:38.217035 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:38.217017 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpt8h\" (UniqueName: \"kubernetes.io/projected/a15abe33-b4eb-49a9-a168-73c648ab3ece-kube-api-access-xpt8h\") pod \"keda-admission-cf49989db-j2k27\" (UID: \"a15abe33-b4eb-49a9-a168-73c648ab3ece\") " pod="openshift-keda/keda-admission-cf49989db-j2k27" Apr 24 14:29:38.400530 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:38.400431 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-j2k27" Apr 24 14:29:38.410336 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:38.410314 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-certificates\") pod \"keda-metrics-apiserver-7c9f485588-ntc2q\" (UID: \"2091da71-b9f6-43f1-b580-a2b2d431275a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q" Apr 24 14:29:38.410459 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:29:38.410423 2570 secret.go:281] references non-existent secret key: tls.crt Apr 24 14:29:38.410459 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:29:38.410436 2570 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 24 14:29:38.410459 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:29:38.410454 2570 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q: references non-existent secret key: tls.crt Apr 24 14:29:38.410558 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:29:38.410515 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-certificates podName:2091da71-b9f6-43f1-b580-a2b2d431275a nodeName:}" failed. No retries permitted until 2026-04-24 14:29:39.410496976 +0000 UTC m=+357.535900769 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-certificates") pod "keda-metrics-apiserver-7c9f485588-ntc2q" (UID: "2091da71-b9f6-43f1-b580-a2b2d431275a") : references non-existent secret key: tls.crt Apr 24 14:29:38.516368 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:38.516339 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-j2k27"] Apr 24 14:29:38.519450 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:29:38.519426 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda15abe33_b4eb_49a9_a168_73c648ab3ece.slice/crio-17e78c9ecf188a608c11f0d2a5df395bd9f7a1c41a590d25f8a1b9b268b98947 WatchSource:0}: Error finding container 17e78c9ecf188a608c11f0d2a5df395bd9f7a1c41a590d25f8a1b9b268b98947: Status 404 returned error can't find the container with id 17e78c9ecf188a608c11f0d2a5df395bd9f7a1c41a590d25f8a1b9b268b98947 Apr 24 14:29:38.520565 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:38.520550 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 24 14:29:39.417185 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:39.417148 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-certificates\") pod \"keda-metrics-apiserver-7c9f485588-ntc2q\" (UID: \"2091da71-b9f6-43f1-b580-a2b2d431275a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q" Apr 24 14:29:39.417665 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:29:39.417312 2570 secret.go:281] references non-existent secret key: tls.crt Apr 24 14:29:39.417665 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:29:39.417336 2570 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 24 14:29:39.417665 ip-10-0-129-34 
Apr 24 14:29:39.417665 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:29:39.417428 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-certificates podName:2091da71-b9f6-43f1-b580-a2b2d431275a nodeName:}" failed. No retries permitted until 2026-04-24 14:29:41.41740891 +0000 UTC m=+359.542812717 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-certificates") pod "keda-metrics-apiserver-7c9f485588-ntc2q" (UID: "2091da71-b9f6-43f1-b580-a2b2d431275a") : references non-existent secret key: tls.crt
Apr 24 14:29:39.500569 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:39.500526 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-j2k27" event={"ID":"a15abe33-b4eb-49a9-a168-73c648ab3ece","Type":"ContainerStarted","Data":"17e78c9ecf188a608c11f0d2a5df395bd9f7a1c41a590d25f8a1b9b268b98947"}
Apr 24 14:29:41.434531 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:41.434483 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-certificates\") pod \"keda-metrics-apiserver-7c9f485588-ntc2q\" (UID: \"2091da71-b9f6-43f1-b580-a2b2d431275a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q"
Apr 24 14:29:41.434921 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:29:41.434621 2570 secret.go:281] references non-existent secret key: tls.crt
Apr 24 14:29:41.434921 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:29:41.434639 2570 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt
Apr 24 14:29:41.434921 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:29:41.434657 2570 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q: references non-existent secret key: tls.crt
Apr 24 14:29:41.434921 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:29:41.434705 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-certificates podName:2091da71-b9f6-43f1-b580-a2b2d431275a nodeName:}" failed. No retries permitted until 2026-04-24 14:29:45.434691687 +0000 UTC m=+363.560095480 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-certificates") pod "keda-metrics-apiserver-7c9f485588-ntc2q" (UID: "2091da71-b9f6-43f1-b580-a2b2d431275a") : references non-existent secret key: tls.crt
Apr 24 14:29:41.507349 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:41.507303 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-j2k27" event={"ID":"a15abe33-b4eb-49a9-a168-73c648ab3ece","Type":"ContainerStarted","Data":"d111020acdca8c29640e4ae992f194d90702dcf9f9d03bd3ccee2184d36c952a"}
Apr 24 14:29:41.507498 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:41.507402 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-admission-cf49989db-j2k27"
Apr 24 14:29:41.524815 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:41.524768 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-admission-cf49989db-j2k27" podStartSLOduration=1.340255822 podStartE2EDuration="3.524754664s" podCreationTimestamp="2026-04-24 14:29:38 +0000 UTC" firstStartedPulling="2026-04-24 14:29:38.5206709 +0000 UTC m=+356.646074692" lastFinishedPulling="2026-04-24 14:29:40.70516973 +0000 UTC m=+358.830573534" observedRunningTime="2026-04-24 14:29:41.523385751 +0000 UTC m=+359.648789566" watchObservedRunningTime="2026-04-24 14:29:41.524754664 +0000 UTC m=+359.650158477"
Apr 24 14:29:45.462807 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:45.462754 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-certificates\") pod \"keda-metrics-apiserver-7c9f485588-ntc2q\" (UID: \"2091da71-b9f6-43f1-b580-a2b2d431275a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q"
Apr 24 14:29:45.465247 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:45.465228 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2091da71-b9f6-43f1-b580-a2b2d431275a-certificates\") pod \"keda-metrics-apiserver-7c9f485588-ntc2q\" (UID: \"2091da71-b9f6-43f1-b580-a2b2d431275a\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q"
Apr 24 14:29:45.597815 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:45.597775 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q"
Apr 24 14:29:45.710586 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:45.710558 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q"]
Apr 24 14:29:45.712903 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:29:45.712842 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2091da71_b9f6_43f1_b580_a2b2d431275a.slice/crio-64c6e45b0e58974f185dfe0b3b8415670be6722bfc3d5b6b97c7ced8f27fbc34 WatchSource:0}: Error finding container 64c6e45b0e58974f185dfe0b3b8415670be6722bfc3d5b6b97c7ced8f27fbc34: Status 404 returned error can't find the container with id 64c6e45b0e58974f185dfe0b3b8415670be6722bfc3d5b6b97c7ced8f27fbc34
Apr 24 14:29:46.521260 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:46.521221 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q" event={"ID":"2091da71-b9f6-43f1-b580-a2b2d431275a","Type":"ContainerStarted","Data":"64c6e45b0e58974f185dfe0b3b8415670be6722bfc3d5b6b97c7ced8f27fbc34"}
Apr 24 14:29:48.527050 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:48.527014 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q" event={"ID":"2091da71-b9f6-43f1-b580-a2b2d431275a","Type":"ContainerStarted","Data":"599be78923f7218e6f1ba65187b99afeadddaf716e4e2a19fa7e2b2ef8dcf4f1"}
Apr 24 14:29:48.527422 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:48.527227 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q"
Apr 24 14:29:48.543858 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:48.543802 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q" podStartSLOduration=9.396151426 podStartE2EDuration="11.543787671s" podCreationTimestamp="2026-04-24 14:29:37 +0000 UTC" firstStartedPulling="2026-04-24 14:29:45.714067763 +0000 UTC m=+363.839471556" lastFinishedPulling="2026-04-24 14:29:47.861703992 +0000 UTC m=+365.987107801" observedRunningTime="2026-04-24 14:29:48.54320217 +0000 UTC m=+366.668605985" watchObservedRunningTime="2026-04-24 14:29:48.543787671 +0000 UTC m=+366.669191486"
Apr 24 14:29:59.534489 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:29:59.534460 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-ntc2q"
Apr 24 14:30:02.511721 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:02.511682 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-admission-cf49989db-j2k27"
Apr 24 14:30:43.896595 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:43.896560 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-b7dc77d59-lqnh5"]
Apr 24 14:30:43.898541 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:43.898523 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5"
Apr 24 14:30:43.901315 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:43.901296 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-controller-manager-dockercfg-qz5pf\""
Apr 24 14:30:43.901467 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:43.901447 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-webhook-server-cert\""
Apr 24 14:30:43.902611 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:43.902595 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\""
Apr 24 14:30:43.902679 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:43.902634 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\""
Apr 24 14:30:43.908707 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:43.908686 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-b7dc77d59-lqnh5"]
Apr 24 14:30:43.931674 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:43.931644 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/seaweedfs-86cc847c5c-c8vhf"]
Apr 24 14:30:43.933648 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:43.933629 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-c8vhf"
Apr 24 14:30:43.936349 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:43.936328 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"mlpipeline-s3-artifact\""
Apr 24 14:30:43.936445 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:43.936371 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"default-dockercfg-rhgm5\""
Apr 24 14:30:43.945674 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:43.945656 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-c8vhf"]
Apr 24 14:30:43.968189 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:43.968167 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c9305c6b-7c90-4218-aac7-c0b4903e2674-data\") pod \"seaweedfs-86cc847c5c-c8vhf\" (UID: \"c9305c6b-7c90-4218-aac7-c0b4903e2674\") " pod="kserve/seaweedfs-86cc847c5c-c8vhf"
Apr 24 14:30:43.968282 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:43.968202 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxmpz\" (UniqueName: \"kubernetes.io/projected/38a64272-4634-4aec-94bb-b5f148a82da4-kube-api-access-bxmpz\") pod \"kserve-controller-manager-b7dc77d59-lqnh5\" (UID: \"38a64272-4634-4aec-94bb-b5f148a82da4\") " pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5"
Apr 24 14:30:43.968282 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:43.968224 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38a64272-4634-4aec-94bb-b5f148a82da4-cert\") pod \"kserve-controller-manager-b7dc77d59-lqnh5\" (UID: \"38a64272-4634-4aec-94bb-b5f148a82da4\") " pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5"
Apr 24 14:30:43.968282 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:43.968254 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv8mn\" (UniqueName: \"kubernetes.io/projected/c9305c6b-7c90-4218-aac7-c0b4903e2674-kube-api-access-pv8mn\") pod \"seaweedfs-86cc847c5c-c8vhf\" (UID: \"c9305c6b-7c90-4218-aac7-c0b4903e2674\") " pod="kserve/seaweedfs-86cc847c5c-c8vhf"
Apr 24 14:30:44.068892 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:44.068864 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c9305c6b-7c90-4218-aac7-c0b4903e2674-data\") pod \"seaweedfs-86cc847c5c-c8vhf\" (UID: \"c9305c6b-7c90-4218-aac7-c0b4903e2674\") " pod="kserve/seaweedfs-86cc847c5c-c8vhf"
Apr 24 14:30:44.068892 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:44.068904 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bxmpz\" (UniqueName: \"kubernetes.io/projected/38a64272-4634-4aec-94bb-b5f148a82da4-kube-api-access-bxmpz\") pod \"kserve-controller-manager-b7dc77d59-lqnh5\" (UID: \"38a64272-4634-4aec-94bb-b5f148a82da4\") " pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5"
Apr 24 14:30:44.069087 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:44.068923 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38a64272-4634-4aec-94bb-b5f148a82da4-cert\") pod \"kserve-controller-manager-b7dc77d59-lqnh5\" (UID: \"38a64272-4634-4aec-94bb-b5f148a82da4\") " pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5"
Apr 24 14:30:44.069087 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:44.068945 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pv8mn\" (UniqueName: \"kubernetes.io/projected/c9305c6b-7c90-4218-aac7-c0b4903e2674-kube-api-access-pv8mn\") pod \"seaweedfs-86cc847c5c-c8vhf\" (UID: \"c9305c6b-7c90-4218-aac7-c0b4903e2674\") " pod="kserve/seaweedfs-86cc847c5c-c8vhf"
Apr 24 14:30:44.069087 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:30:44.069029 2570 secret.go:189] Couldn't get secret kserve/kserve-webhook-server-cert: secret "kserve-webhook-server-cert" not found
Apr 24 14:30:44.069234 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:30:44.069155 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38a64272-4634-4aec-94bb-b5f148a82da4-cert podName:38a64272-4634-4aec-94bb-b5f148a82da4 nodeName:}" failed. No retries permitted until 2026-04-24 14:30:44.569131294 +0000 UTC m=+422.694535089 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/38a64272-4634-4aec-94bb-b5f148a82da4-cert") pod "kserve-controller-manager-b7dc77d59-lqnh5" (UID: "38a64272-4634-4aec-94bb-b5f148a82da4") : secret "kserve-webhook-server-cert" not found Apr 24 14:30:44.069842 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:44.069826 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c9305c6b-7c90-4218-aac7-c0b4903e2674-data\") pod \"seaweedfs-86cc847c5c-c8vhf\" (UID: \"c9305c6b-7c90-4218-aac7-c0b4903e2674\") " pod="kserve/seaweedfs-86cc847c5c-c8vhf" Apr 24 14:30:44.079857 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:44.079829 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxmpz\" (UniqueName: \"kubernetes.io/projected/38a64272-4634-4aec-94bb-b5f148a82da4-kube-api-access-bxmpz\") pod \"kserve-controller-manager-b7dc77d59-lqnh5\" (UID: \"38a64272-4634-4aec-94bb-b5f148a82da4\") " pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5" Apr 24 14:30:44.080002 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:44.079981 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv8mn\" (UniqueName: \"kubernetes.io/projected/c9305c6b-7c90-4218-aac7-c0b4903e2674-kube-api-access-pv8mn\") pod \"seaweedfs-86cc847c5c-c8vhf\" (UID: \"c9305c6b-7c90-4218-aac7-c0b4903e2674\") " pod="kserve/seaweedfs-86cc847c5c-c8vhf" Apr 24 14:30:44.242013 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:44.241986 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-c8vhf" Apr 24 14:30:44.356843 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:44.356813 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-c8vhf"] Apr 24 14:30:44.360202 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:30:44.360170 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9305c6b_7c90_4218_aac7_c0b4903e2674.slice/crio-1dc776bf109f4316907fe2c5e197c139da2ed98d82b8a953af8882289c308cba WatchSource:0}: Error finding container 1dc776bf109f4316907fe2c5e197c139da2ed98d82b8a953af8882289c308cba: Status 404 returned error can't find the container with id 1dc776bf109f4316907fe2c5e197c139da2ed98d82b8a953af8882289c308cba Apr 24 14:30:44.573284 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:44.573207 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38a64272-4634-4aec-94bb-b5f148a82da4-cert\") pod \"kserve-controller-manager-b7dc77d59-lqnh5\" (UID: \"38a64272-4634-4aec-94bb-b5f148a82da4\") " pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5" Apr 24 14:30:44.575527 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:44.575505 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38a64272-4634-4aec-94bb-b5f148a82da4-cert\") pod \"kserve-controller-manager-b7dc77d59-lqnh5\" (UID: \"38a64272-4634-4aec-94bb-b5f148a82da4\") " pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5" Apr 24 14:30:44.673135 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:44.673076 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-c8vhf" event={"ID":"c9305c6b-7c90-4218-aac7-c0b4903e2674","Type":"ContainerStarted","Data":"1dc776bf109f4316907fe2c5e197c139da2ed98d82b8a953af8882289c308cba"} Apr 
24 14:30:44.808342 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:44.808309 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5" Apr 24 14:30:44.963050 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:44.963020 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-b7dc77d59-lqnh5"] Apr 24 14:30:44.966131 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:30:44.966081 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38a64272_4634_4aec_94bb_b5f148a82da4.slice/crio-9fc1a2544b0303a78113cb6e455aee70645ae016a1d0ac30df5f9c68381d7f99 WatchSource:0}: Error finding container 9fc1a2544b0303a78113cb6e455aee70645ae016a1d0ac30df5f9c68381d7f99: Status 404 returned error can't find the container with id 9fc1a2544b0303a78113cb6e455aee70645ae016a1d0ac30df5f9c68381d7f99 Apr 24 14:30:45.679034 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:45.678997 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5" event={"ID":"38a64272-4634-4aec-94bb-b5f148a82da4","Type":"ContainerStarted","Data":"9fc1a2544b0303a78113cb6e455aee70645ae016a1d0ac30df5f9c68381d7f99"} Apr 24 14:30:48.688949 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:48.688913 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-c8vhf" event={"ID":"c9305c6b-7c90-4218-aac7-c0b4903e2674","Type":"ContainerStarted","Data":"ac9bdb14f1116dd79dd2d4d498fa56d9bd72c42be11f24d478791f3354bf0294"} Apr 24 14:30:48.689383 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:48.689131 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/seaweedfs-86cc847c5c-c8vhf" Apr 24 14:30:48.690240 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:48.690215 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5" event={"ID":"38a64272-4634-4aec-94bb-b5f148a82da4","Type":"ContainerStarted","Data":"b317d2069c31460103e83abe0c9b9fe77f7c11517330c262333ac22a63a3a2d7"} Apr 24 14:30:48.690345 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:48.690323 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5" Apr 24 14:30:48.706652 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:48.706597 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/seaweedfs-86cc847c5c-c8vhf" podStartSLOduration=1.9364248 podStartE2EDuration="5.706579626s" podCreationTimestamp="2026-04-24 14:30:43 +0000 UTC" firstStartedPulling="2026-04-24 14:30:44.361565487 +0000 UTC m=+422.486969279" lastFinishedPulling="2026-04-24 14:30:48.131720308 +0000 UTC m=+426.257124105" observedRunningTime="2026-04-24 14:30:48.706161884 +0000 UTC m=+426.831565702" watchObservedRunningTime="2026-04-24 14:30:48.706579626 +0000 UTC m=+426.831983442" Apr 24 14:30:48.722733 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:48.722687 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5" podStartSLOduration=2.592915477 podStartE2EDuration="5.722672671s" podCreationTimestamp="2026-04-24 14:30:43 +0000 UTC" firstStartedPulling="2026-04-24 14:30:44.9677221 +0000 UTC m=+423.093125896" lastFinishedPulling="2026-04-24 14:30:48.097479298 +0000 UTC m=+426.222883090" observedRunningTime="2026-04-24 14:30:48.721704624 
+0000 UTC m=+426.847108477" watchObservedRunningTime="2026-04-24 14:30:48.722672671 +0000 UTC m=+426.848076487" Apr 24 14:30:54.695292 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:30:54.695263 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/seaweedfs-86cc847c5c-c8vhf" Apr 24 14:31:19.116343 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.116306 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-b7dc77d59-lqnh5"] Apr 24 14:31:19.116729 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.116554 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5" podUID="38a64272-4634-4aec-94bb-b5f148a82da4" containerName="manager" containerID="cri-o://b317d2069c31460103e83abe0c9b9fe77f7c11517330c262333ac22a63a3a2d7" gracePeriod=10 Apr 24 14:31:19.121693 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.121666 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5" Apr 24 14:31:19.345677 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.345656 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5" Apr 24 14:31:19.428952 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.428927 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxmpz\" (UniqueName: \"kubernetes.io/projected/38a64272-4634-4aec-94bb-b5f148a82da4-kube-api-access-bxmpz\") pod \"38a64272-4634-4aec-94bb-b5f148a82da4\" (UID: \"38a64272-4634-4aec-94bb-b5f148a82da4\") " Apr 24 14:31:19.429123 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.428980 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38a64272-4634-4aec-94bb-b5f148a82da4-cert\") pod \"38a64272-4634-4aec-94bb-b5f148a82da4\" (UID: \"38a64272-4634-4aec-94bb-b5f148a82da4\") " Apr 24 14:31:19.430993 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.430961 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a64272-4634-4aec-94bb-b5f148a82da4-cert" (OuterVolumeSpecName: "cert") pod "38a64272-4634-4aec-94bb-b5f148a82da4" (UID: "38a64272-4634-4aec-94bb-b5f148a82da4"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 14:31:19.431096 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.430994 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a64272-4634-4aec-94bb-b5f148a82da4-kube-api-access-bxmpz" (OuterVolumeSpecName: "kube-api-access-bxmpz") pod "38a64272-4634-4aec-94bb-b5f148a82da4" (UID: "38a64272-4634-4aec-94bb-b5f148a82da4"). InnerVolumeSpecName "kube-api-access-bxmpz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 14:31:19.530045 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.530007 2570 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38a64272-4634-4aec-94bb-b5f148a82da4-cert\") on node \"ip-10-0-129-34.ec2.internal\" DevicePath \"\"" Apr 24 14:31:19.530045 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.530041 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bxmpz\" (UniqueName: \"kubernetes.io/projected/38a64272-4634-4aec-94bb-b5f148a82da4-kube-api-access-bxmpz\") on node \"ip-10-0-129-34.ec2.internal\" DevicePath \"\"" Apr 24 14:31:19.777856 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.777767 2570 generic.go:358] "Generic (PLEG): container finished" podID="38a64272-4634-4aec-94bb-b5f148a82da4" containerID="b317d2069c31460103e83abe0c9b9fe77f7c11517330c262333ac22a63a3a2d7" exitCode=0 Apr 24 14:31:19.777856 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.777841 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5" Apr 24 14:31:19.778043 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.777856 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5" event={"ID":"38a64272-4634-4aec-94bb-b5f148a82da4","Type":"ContainerDied","Data":"b317d2069c31460103e83abe0c9b9fe77f7c11517330c262333ac22a63a3a2d7"} Apr 24 14:31:19.778043 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.777905 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-b7dc77d59-lqnh5" event={"ID":"38a64272-4634-4aec-94bb-b5f148a82da4","Type":"ContainerDied","Data":"9fc1a2544b0303a78113cb6e455aee70645ae016a1d0ac30df5f9c68381d7f99"} Apr 24 14:31:19.778043 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.777920 2570 scope.go:117] "RemoveContainer" containerID="b317d2069c31460103e83abe0c9b9fe77f7c11517330c262333ac22a63a3a2d7" Apr 24 14:31:19.785440 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.785348 2570 scope.go:117] "RemoveContainer" containerID="b317d2069c31460103e83abe0c9b9fe77f7c11517330c262333ac22a63a3a2d7" Apr 24 14:31:19.785684 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:31:19.785660 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b317d2069c31460103e83abe0c9b9fe77f7c11517330c262333ac22a63a3a2d7\": container with ID starting with b317d2069c31460103e83abe0c9b9fe77f7c11517330c262333ac22a63a3a2d7 not found: ID does not exist" containerID="b317d2069c31460103e83abe0c9b9fe77f7c11517330c262333ac22a63a3a2d7" Apr 24 14:31:19.785790 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.785700 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b317d2069c31460103e83abe0c9b9fe77f7c11517330c262333ac22a63a3a2d7"} err="failed to get container status \"b317d2069c31460103e83abe0c9b9fe77f7c11517330c262333ac22a63a3a2d7\": rpc error: code = NotFound desc = could not find container \"b317d2069c31460103e83abe0c9b9fe77f7c11517330c262333ac22a63a3a2d7\": container with ID starting with b317d2069c31460103e83abe0c9b9fe77f7c11517330c262333ac22a63a3a2d7 not found: ID does not exist" Apr 24 14:31:19.798433 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.798410 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-b7dc77d59-lqnh5"] Apr 24 14:31:19.802164 
ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:19.802141 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve/kserve-controller-manager-b7dc77d59-lqnh5"] Apr 24 14:31:20.428252 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:20.428222 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38a64272-4634-4aec-94bb-b5f148a82da4" path="/var/lib/kubelet/pods/38a64272-4634-4aec-94bb-b5f148a82da4/volumes" Apr 24 14:31:54.644984 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:54.644946 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/model-serving-api-86f7b4b499-28jn6"] Apr 24 14:31:54.645405 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:54.645207 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="38a64272-4634-4aec-94bb-b5f148a82da4" containerName="manager" Apr 24 14:31:54.645405 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:54.645218 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a64272-4634-4aec-94bb-b5f148a82da4" containerName="manager" Apr 24 14:31:54.645405 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:54.645267 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="38a64272-4634-4aec-94bb-b5f148a82da4" containerName="manager" Apr 24 14:31:54.647966 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:54.647947 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-28jn6" Apr 24 14:31:54.651187 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:54.651166 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-dockercfg-7ql8c\"" Apr 24 14:31:54.651307 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:54.651202 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-tls\"" Apr 24 14:31:54.657740 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:54.657720 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-28jn6"] Apr 24 14:31:54.682859 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:54.682835 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/a0eb2a61-d50e-427c-b5b2-ba208e944e93-tls-certs\") pod \"model-serving-api-86f7b4b499-28jn6\" (UID: \"a0eb2a61-d50e-427c-b5b2-ba208e944e93\") " pod="kserve/model-serving-api-86f7b4b499-28jn6" Apr 24 14:31:54.682962 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:54.682874 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk44k\" (UniqueName: \"kubernetes.io/projected/a0eb2a61-d50e-427c-b5b2-ba208e944e93-kube-api-access-fk44k\") pod \"model-serving-api-86f7b4b499-28jn6\" (UID: \"a0eb2a61-d50e-427c-b5b2-ba208e944e93\") " pod="kserve/model-serving-api-86f7b4b499-28jn6" Apr 24 14:31:54.783210 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:54.783178 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/a0eb2a61-d50e-427c-b5b2-ba208e944e93-tls-certs\") pod \"model-serving-api-86f7b4b499-28jn6\" (UID: \"a0eb2a61-d50e-427c-b5b2-ba208e944e93\") " pod="kserve/model-serving-api-86f7b4b499-28jn6" Apr 24 14:31:54.783330 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:54.783221 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fk44k\" (UniqueName: 
\"kubernetes.io/projected/a0eb2a61-d50e-427c-b5b2-ba208e944e93-kube-api-access-fk44k\") pod \"model-serving-api-86f7b4b499-28jn6\" (UID: \"a0eb2a61-d50e-427c-b5b2-ba208e944e93\") " pod="kserve/model-serving-api-86f7b4b499-28jn6" Apr 24 14:31:54.783330 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:31:54.783315 2570 secret.go:189] Couldn't get secret kserve/model-serving-api-tls: secret "model-serving-api-tls" not found Apr 24 14:31:54.783403 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:31:54.783395 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0eb2a61-d50e-427c-b5b2-ba208e944e93-tls-certs podName:a0eb2a61-d50e-427c-b5b2-ba208e944e93 nodeName:}" failed. No retries permitted until 2026-04-24 14:31:55.283378635 +0000 UTC m=+493.408782428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certs" (UniqueName: "kubernetes.io/secret/a0eb2a61-d50e-427c-b5b2-ba208e944e93-tls-certs") pod "model-serving-api-86f7b4b499-28jn6" (UID: "a0eb2a61-d50e-427c-b5b2-ba208e944e93") : secret "model-serving-api-tls" not found Apr 24 14:31:54.791987 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:54.791967 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk44k\" (UniqueName: \"kubernetes.io/projected/a0eb2a61-d50e-427c-b5b2-ba208e944e93-kube-api-access-fk44k\") pod \"model-serving-api-86f7b4b499-28jn6\" (UID: \"a0eb2a61-d50e-427c-b5b2-ba208e944e93\") " pod="kserve/model-serving-api-86f7b4b499-28jn6" Apr 24 14:31:55.287247 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:55.287215 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/a0eb2a61-d50e-427c-b5b2-ba208e944e93-tls-certs\") pod \"model-serving-api-86f7b4b499-28jn6\" (UID: \"a0eb2a61-d50e-427c-b5b2-ba208e944e93\") " pod="kserve/model-serving-api-86f7b4b499-28jn6" Apr 24 14:31:55.289534 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:55.289516 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/a0eb2a61-d50e-427c-b5b2-ba208e944e93-tls-certs\") pod \"model-serving-api-86f7b4b499-28jn6\" (UID: \"a0eb2a61-d50e-427c-b5b2-ba208e944e93\") " pod="kserve/model-serving-api-86f7b4b499-28jn6" Apr 24 14:31:55.558250 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:55.558152 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-28jn6" Apr 24 14:31:55.675471 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:55.675440 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-28jn6"] Apr 24 14:31:55.678768 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:31:55.678717 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0eb2a61_d50e_427c_b5b2_ba208e944e93.slice/crio-09dcf58189266f2f608a994e6d15b40d12780dd94aab6bbf435e9fb226c5db5e WatchSource:0}: Error finding container 09dcf58189266f2f608a994e6d15b40d12780dd94aab6bbf435e9fb226c5db5e: Status 404 returned error can't find the container with id 09dcf58189266f2f608a994e6d15b40d12780dd94aab6bbf435e9fb226c5db5e Apr 24 14:31:55.876706 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:55.876613 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-28jn6" event={"ID":"a0eb2a61-d50e-427c-b5b2-ba208e944e93","Type":"ContainerStarted","Data":"09dcf58189266f2f608a994e6d15b40d12780dd94aab6bbf435e9fb226c5db5e"} Apr 24 14:31:57.882759 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:57.882724 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-28jn6" event={"ID":"a0eb2a61-d50e-427c-b5b2-ba208e944e93","Type":"ContainerStarted","Data":"753a2932242c9e63ab7b20bc55977dbd12b228dde836127ef731dd70729ba7bc"} Apr 24 14:31:57.883263 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:57.882867 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/model-serving-api-86f7b4b499-28jn6" Apr 24 14:31:57.900725 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:31:57.900671 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/model-serving-api-86f7b4b499-28jn6" podStartSLOduration=2.728814511 podStartE2EDuration="3.900653199s" podCreationTimestamp="2026-04-24 14:31:54 +0000 UTC" firstStartedPulling="2026-04-24 14:31:55.680565168 +0000 UTC m=+493.805968961" lastFinishedPulling="2026-04-24 14:31:56.852403842 +0000 UTC m=+494.977807649" observedRunningTime="2026-04-24 14:31:57.899784777 +0000 UTC m=+496.025188593" watchObservedRunningTime="2026-04-24 14:31:57.900653199 +0000 UTC m=+496.026057016" Apr 24 14:32:08.888929 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:08.888896 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/model-serving-api-86f7b4b499-28jn6" Apr 24 14:32:31.860505 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:31.860296 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz"] Apr 24 14:32:31.866652 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:31.866618 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" Apr 24 14:32:31.868766 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:31.868738 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz"] Apr 24 14:32:31.869431 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:31.869409 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-cnrls\"" Apr 24 14:32:31.876636 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:31.876614 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" Apr 24 14:32:31.995633 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:31.995433 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz"] Apr 24 14:32:31.998304 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:32:31.998275 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde1d63c0_f1a2_4f24_8a52_7036a3e9e6b3.slice/crio-962440deaaf5007a631257449e01bd8df2fdf330700328b3ee12ae0e275ab3a1 WatchSource:0}: Error finding container 962440deaaf5007a631257449e01bd8df2fdf330700328b3ee12ae0e275ab3a1: Status 404 returned error can't find the container with id 962440deaaf5007a631257449e01bd8df2fdf330700328b3ee12ae0e275ab3a1 Apr 24 14:32:32.153821 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.153741 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m"] Apr 24 14:32:32.159545 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.159526 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" Apr 24 14:32:32.162840 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.162798 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m"] Apr 24 14:32:32.169920 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.169902 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" Apr 24 14:32:32.260585 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.260554 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj"] Apr 24 14:32:32.265432 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.265407 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" Apr 24 14:32:32.274056 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.274031 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj"] Apr 24 14:32:32.293961 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.293933 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m"] Apr 24 14:32:32.297448 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:32:32.297423 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6676863f_a8e7_4f51_8edd_58f6b5481f05.slice/crio-861fca44dedaf4847dffe6f802620546efbab971e84fca368dce1ae048860e5a WatchSource:0}: Error finding container 861fca44dedaf4847dffe6f802620546efbab971e84fca368dce1ae048860e5a: Status 404 returned error can't find the container with id 861fca44dedaf4847dffe6f802620546efbab971e84fca368dce1ae048860e5a Apr 24 14:32:32.364664 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.364627 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f65fd936-4a65-48ec-b909-5d3b58fc41f5-kserve-provision-location\") pod \"isvc-xgboost-graph-predictor-784cb989b8-j9rbj\" (UID: \"f65fd936-4a65-48ec-b909-5d3b58fc41f5\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" Apr 24 14:32:32.447998 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.447965 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7"] Apr 24 14:32:32.452061 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.452038 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" Apr 24 14:32:32.459060 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.458912 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7"] Apr 24 14:32:32.465677 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.465649 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f65fd936-4a65-48ec-b909-5d3b58fc41f5-kserve-provision-location\") pod \"isvc-xgboost-graph-predictor-784cb989b8-j9rbj\" (UID: \"f65fd936-4a65-48ec-b909-5d3b58fc41f5\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" Apr 24 14:32:32.465994 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.465975 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f65fd936-4a65-48ec-b909-5d3b58fc41f5-kserve-provision-location\") pod \"isvc-xgboost-graph-predictor-784cb989b8-j9rbj\" (UID: \"f65fd936-4a65-48ec-b909-5d3b58fc41f5\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" Apr 24 14:32:32.566784 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.566747 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/c9618735-5632-4e5e-8623-0a3cfc9508f8-kserve-provision-location\") pod \"isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7\" (UID: \"c9618735-5632-4e5e-8623-0a3cfc9508f8\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" Apr 24 14:32:32.578293 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.578267 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" Apr 24 14:32:32.667774 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.667722 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/c9618735-5632-4e5e-8623-0a3cfc9508f8-kserve-provision-location\") pod \"isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7\" (UID: \"c9618735-5632-4e5e-8623-0a3cfc9508f8\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" Apr 24 14:32:32.668180 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.668158 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/c9618735-5632-4e5e-8623-0a3cfc9508f8-kserve-provision-location\") pod \"isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7\" (UID: \"c9618735-5632-4e5e-8623-0a3cfc9508f8\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" Apr 24 14:32:32.693657 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.693577 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj"] Apr 24 14:32:32.697236 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:32:32.697202 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf65fd936_4a65_48ec_b909_5d3b58fc41f5.slice/crio-e94974ae010a9ab7f0b340596eedf68f404268ff5c625d829d6cf650e86c510a WatchSource:0}: Error finding container e94974ae010a9ab7f0b340596eedf68f404268ff5c625d829d6cf650e86c510a: Status 404 returned error can't find the container with id e94974ae010a9ab7f0b340596eedf68f404268ff5c625d829d6cf650e86c510a Apr 24 14:32:32.762812 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.762775 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" Apr 24 14:32:32.933700 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.933651 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7"] Apr 24 14:32:32.948473 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:32:32.948438 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9618735_5632_4e5e_8623_0a3cfc9508f8.slice/crio-845b8e5603e6afa079c618829e89b09b68181b99db397f0eedb9bfad387a045c WatchSource:0}: Error finding container 845b8e5603e6afa079c618829e89b09b68181b99db397f0eedb9bfad387a045c: Status 404 returned error can't find the container with id 845b8e5603e6afa079c618829e89b09b68181b99db397f0eedb9bfad387a045c Apr 24 14:32:32.980195 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.980162 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" event={"ID":"de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3","Type":"ContainerStarted","Data":"962440deaaf5007a631257449e01bd8df2fdf330700328b3ee12ae0e275ab3a1"} Apr 24 14:32:32.988501 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.988465 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" event={"ID":"c9618735-5632-4e5e-8623-0a3cfc9508f8","Type":"ContainerStarted","Data":"845b8e5603e6afa079c618829e89b09b68181b99db397f0eedb9bfad387a045c"} Apr 24 14:32:32.990740 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.990710 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" event={"ID":"6676863f-a8e7-4f51-8edd-58f6b5481f05","Type":"ContainerStarted","Data":"861fca44dedaf4847dffe6f802620546efbab971e84fca368dce1ae048860e5a"} Apr 24 14:32:32.992520 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:32.992495 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" event={"ID":"f65fd936-4a65-48ec-b909-5d3b58fc41f5","Type":"ContainerStarted","Data":"e94974ae010a9ab7f0b340596eedf68f404268ff5c625d829d6cf650e86c510a"} Apr 24 14:32:50.064786 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:50.064743 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" event={"ID":"c9618735-5632-4e5e-8623-0a3cfc9508f8","Type":"ContainerStarted","Data":"249d1122a833f9ab544d6ca82d6ba6eeba2928cafe0d6150f5ab3f09eadc661a"} Apr 24 14:32:50.066153 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:50.066127 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" event={"ID":"6676863f-a8e7-4f51-8edd-58f6b5481f05","Type":"ContainerStarted","Data":"440bd041ecb37252da79a94f4dc64c39ca7ace0d285ccd4e5d0de545bc84b73d"} Apr 24 14:32:50.066396 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:50.066380 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" Apr 24 14:32:50.067396 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:50.067366 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" 
event={"ID":"f65fd936-4a65-48ec-b909-5d3b58fc41f5","Type":"ContainerStarted","Data":"774cd204b7dd845e87c57816a23a4fa4e9d657b532cdcfae73acfefc8bc32c87"} Apr 24 14:32:50.067938 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:50.067913 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" podUID="6676863f-a8e7-4f51-8edd-58f6b5481f05" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 24 14:32:50.068565 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:50.068540 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" event={"ID":"de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3","Type":"ContainerStarted","Data":"fb40aaaf32c6e139f01513d5ab7df1ab023640d1a868b09073ef369cc7c9f100"} Apr 24 14:32:50.068791 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:50.068774 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" Apr 24 14:32:50.069670 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:50.069650 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" podUID="de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 24 14:32:50.109294 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:50.109239 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" podStartSLOduration=2.025983532 podStartE2EDuration="19.109225291s" podCreationTimestamp="2026-04-24 14:32:31 +0000 UTC" firstStartedPulling="2026-04-24 14:32:32.000041248 +0000 UTC m=+530.125445054" lastFinishedPulling="2026-04-24 14:32:49.083283019 +0000 UTC m=+547.208686813" observedRunningTime="2026-04-24 14:32:50.108386783 +0000 UTC m=+548.233790596" watchObservedRunningTime="2026-04-24 14:32:50.109225291 +0000 UTC m=+548.234629105" Apr 24 14:32:50.124042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:50.123998 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" podStartSLOduration=1.3294898499999999 podStartE2EDuration="18.123983008s" podCreationTimestamp="2026-04-24 14:32:32 +0000 UTC" firstStartedPulling="2026-04-24 14:32:32.299342018 +0000 UTC m=+530.424745814" lastFinishedPulling="2026-04-24 14:32:49.093835167 +0000 UTC m=+547.219238972" observedRunningTime="2026-04-24 14:32:50.122992646 +0000 UTC m=+548.248396463" watchObservedRunningTime="2026-04-24 14:32:50.123983008 +0000 UTC m=+548.249386822" Apr 24 14:32:51.072051 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:51.072013 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" podUID="de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 24 14:32:51.072507 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:51.072013 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" podUID="6676863f-a8e7-4f51-8edd-58f6b5481f05" containerName="kserve-container" probeResult="failure" output="dial 
tcp 10.132.0.19:8080: connect: connection refused" Apr 24 14:32:53.078629 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:53.078596 2570 generic.go:358] "Generic (PLEG): container finished" podID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" containerID="774cd204b7dd845e87c57816a23a4fa4e9d657b532cdcfae73acfefc8bc32c87" exitCode=0 Apr 24 14:32:53.079027 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:53.078671 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" event={"ID":"f65fd936-4a65-48ec-b909-5d3b58fc41f5","Type":"ContainerDied","Data":"774cd204b7dd845e87c57816a23a4fa4e9d657b532cdcfae73acfefc8bc32c87"} Apr 24 14:32:53.083126 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:53.080334 2570 generic.go:358] "Generic (PLEG): container finished" podID="c9618735-5632-4e5e-8623-0a3cfc9508f8" containerID="249d1122a833f9ab544d6ca82d6ba6eeba2928cafe0d6150f5ab3f09eadc661a" exitCode=0 Apr 24 14:32:53.083126 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:32:53.080378 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" event={"ID":"c9618735-5632-4e5e-8623-0a3cfc9508f8","Type":"ContainerDied","Data":"249d1122a833f9ab544d6ca82d6ba6eeba2928cafe0d6150f5ab3f09eadc661a"} Apr 24 14:33:00.105999 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:00.105965 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" event={"ID":"c9618735-5632-4e5e-8623-0a3cfc9508f8","Type":"ContainerStarted","Data":"09c00942d4d576d11362320faac5ea320f8f72fafdbaf9402769ef16949b3cdf"} Apr 24 14:33:00.106442 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:00.106262 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" Apr 24 14:33:00.107622 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:00.107595 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" podUID="c9618735-5632-4e5e-8623-0a3cfc9508f8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 24 14:33:00.121944 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:00.121877 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" podStartSLOduration=1.50312279 podStartE2EDuration="28.121865199s" podCreationTimestamp="2026-04-24 14:32:32 +0000 UTC" firstStartedPulling="2026-04-24 14:32:32.96117544 +0000 UTC m=+531.086579247" lastFinishedPulling="2026-04-24 14:32:59.579917851 +0000 UTC m=+557.705321656" observedRunningTime="2026-04-24 14:33:00.121551608 +0000 UTC m=+558.246955422" watchObservedRunningTime="2026-04-24 14:33:00.121865199 +0000 UTC m=+558.247268995" Apr 24 14:33:01.072628 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:01.072587 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" podUID="de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 24 14:33:01.072819 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:01.072592 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" 
podUID="6676863f-a8e7-4f51-8edd-58f6b5481f05" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 24 14:33:01.114097 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:01.114055 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" podUID="c9618735-5632-4e5e-8623-0a3cfc9508f8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 24 14:33:11.072852 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:11.072799 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" podUID="de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 24 14:33:11.073428 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:11.072799 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" podUID="6676863f-a8e7-4f51-8edd-58f6b5481f05" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 24 14:33:11.114287 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:11.114242 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" podUID="c9618735-5632-4e5e-8623-0a3cfc9508f8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 24 14:33:15.157659 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:15.157622 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" event={"ID":"f65fd936-4a65-48ec-b909-5d3b58fc41f5","Type":"ContainerStarted","Data":"7a608afcaa710e5cb4f75423b94bfaa976c349fcf01bf8106da0591f84e988fc"} Apr 24 14:33:15.158057 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:15.157917 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" Apr 24 14:33:15.159226 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:15.159202 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" podUID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 24 14:33:15.176028 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:15.175977 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" podStartSLOduration=1.741827558 podStartE2EDuration="43.175964133s" podCreationTimestamp="2026-04-24 14:32:32 +0000 UTC" firstStartedPulling="2026-04-24 14:32:32.699483675 +0000 UTC m=+530.824887468" lastFinishedPulling="2026-04-24 14:33:14.133620237 +0000 UTC m=+572.259024043" observedRunningTime="2026-04-24 14:33:15.173977573 +0000 UTC m=+573.299381388" watchObservedRunningTime="2026-04-24 14:33:15.175964133 +0000 UTC m=+573.301367948" Apr 24 14:33:16.161213 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:16.161171 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" podUID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" 
containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 24 14:33:21.072145 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:21.072075 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" podUID="de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 24 14:33:21.072523 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:21.072080 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" podUID="6676863f-a8e7-4f51-8edd-58f6b5481f05" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 24 14:33:21.114205 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:21.114164 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" podUID="c9618735-5632-4e5e-8623-0a3cfc9508f8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 24 14:33:26.162035 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:26.161990 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" podUID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 24 14:33:31.072785 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:31.072732 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" podUID="de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 24 14:33:31.073212 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:31.072732 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" podUID="6676863f-a8e7-4f51-8edd-58f6b5481f05" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 24 14:33:31.114907 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:31.114859 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" podUID="c9618735-5632-4e5e-8623-0a3cfc9508f8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 24 14:33:36.161525 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:36.161469 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" podUID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 24 14:33:41.072663 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:41.072615 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" podUID="6676863f-a8e7-4f51-8edd-58f6b5481f05" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 24 14:33:41.073040 ip-10-0-129-34 
kubenswrapper[2570]: I0424 14:33:41.072615 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" podUID="de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.18:8080: connect: connection refused" Apr 24 14:33:41.114832 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:41.114786 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" podUID="c9618735-5632-4e5e-8623-0a3cfc9508f8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 24 14:33:42.359609 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:42.359575 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 14:33:42.360394 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:42.360373 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 14:33:46.162189 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:46.162138 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" podUID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 24 14:33:51.074029 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:51.073996 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" Apr 24 14:33:51.074415 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:51.074047 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" Apr 24 14:33:51.114507 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:51.114458 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" podUID="c9618735-5632-4e5e-8623-0a3cfc9508f8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 24 14:33:56.161975 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:33:56.161878 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" podUID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 24 14:34:01.115709 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:01.115669 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" podUID="c9618735-5632-4e5e-8623-0a3cfc9508f8" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.21:8080: connect: connection refused" Apr 24 14:34:06.161806 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:06.161762 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" podUID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 24 
14:34:11.115319 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:11.115283 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" Apr 24 14:34:12.164571 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:12.164537 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz"] Apr 24 14:34:12.164974 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:12.164847 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" podUID="de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3" containerName="kserve-container" containerID="cri-o://fb40aaaf32c6e139f01513d5ab7df1ab023640d1a868b09073ef369cc7c9f100" gracePeriod=30 Apr 24 14:34:12.217421 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:12.217384 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m"] Apr 24 14:34:12.217730 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:12.217685 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" podUID="6676863f-a8e7-4f51-8edd-58f6b5481f05" containerName="kserve-container" containerID="cri-o://440bd041ecb37252da79a94f4dc64c39ca7ace0d285ccd4e5d0de545bc84b73d" gracePeriod=30 Apr 24 14:34:12.263801 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:12.263771 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb"] Apr 24 14:34:12.266595 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:12.266580 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" Apr 24 14:34:12.274710 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:12.274689 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb"] Apr 24 14:34:12.276770 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:12.276751 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" Apr 24 14:34:12.321134 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:12.320677 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz"] Apr 24 14:34:12.324272 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:12.324215 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" Apr 24 14:34:12.333015 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:12.332986 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz"] Apr 24 14:34:12.340520 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:12.340126 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" Apr 24 14:34:12.417207 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:12.417050 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb"] Apr 24 14:34:12.420472 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:34:12.420444 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a931de9_a88f_406c_b48d_f196a475fd94.slice/crio-fec9dcd1e4a992783b2320977dff4c82bce1f334b5d6f82bd529f5d0b75a6234 WatchSource:0}: Error finding container fec9dcd1e4a992783b2320977dff4c82bce1f334b5d6f82bd529f5d0b75a6234: Status 404 returned error can't find the container with id fec9dcd1e4a992783b2320977dff4c82bce1f334b5d6f82bd529f5d0b75a6234 Apr 24 14:34:12.480215 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:12.480188 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz"] Apr 24 14:34:12.484035 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:34:12.484004 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6479872c_3320_4186_bddd_beb84d015947.slice/crio-4d5aa9263f61e42a2d4dc7822d5b6dc2c5d1fc5c0aa696f71fb7b7248a5a5eaa WatchSource:0}: Error finding container 4d5aa9263f61e42a2d4dc7822d5b6dc2c5d1fc5c0aa696f71fb7b7248a5a5eaa: Status 404 returned error can't find the container with id 4d5aa9263f61e42a2d4dc7822d5b6dc2c5d1fc5c0aa696f71fb7b7248a5a5eaa Apr 24 14:34:13.325709 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:13.325662 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" event={"ID":"6479872c-3320-4186-bddd-beb84d015947","Type":"ContainerStarted","Data":"6d9b061eaa7a3ec7f7fd916e8fd26491ece79057bf239588a980e5392a66d114"} Apr 24 14:34:13.325709 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:13.325713 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" event={"ID":"6479872c-3320-4186-bddd-beb84d015947","Type":"ContainerStarted","Data":"4d5aa9263f61e42a2d4dc7822d5b6dc2c5d1fc5c0aa696f71fb7b7248a5a5eaa"} Apr 24 14:34:13.326254 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:13.325867 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" Apr 24 14:34:13.327031 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:13.327003 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" event={"ID":"8a931de9-a88f-406c-b48d-f196a475fd94","Type":"ContainerStarted","Data":"d4555378d73b800311049ff81cd3c7d6817cd20781bec554850e3db05d76464e"} Apr 24 14:34:13.327031 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:13.327031 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" event={"ID":"8a931de9-a88f-406c-b48d-f196a475fd94","Type":"ContainerStarted","Data":"fec9dcd1e4a992783b2320977dff4c82bce1f334b5d6f82bd529f5d0b75a6234"} Apr 24 14:34:13.327246 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:13.327198 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" Apr 24 
14:34:13.327314 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:13.327283 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" podUID="6479872c-3320-4186-bddd-beb84d015947" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused" Apr 24 14:34:13.327979 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:13.327961 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" podUID="8a931de9-a88f-406c-b48d-f196a475fd94" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused" Apr 24 14:34:13.343203 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:13.343160 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" podStartSLOduration=1.343148662 podStartE2EDuration="1.343148662s" podCreationTimestamp="2026-04-24 14:34:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:34:13.341654925 +0000 UTC m=+631.467058741" watchObservedRunningTime="2026-04-24 14:34:13.343148662 +0000 UTC m=+631.468552477" Apr 24 14:34:13.356280 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:13.356242 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" podStartSLOduration=1.356232479 podStartE2EDuration="1.356232479s" podCreationTimestamp="2026-04-24 14:34:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:34:13.355064159 +0000 UTC m=+631.480467974" watchObservedRunningTime="2026-04-24 14:34:13.356232479 +0000 UTC m=+631.481636290" Apr 24 14:34:14.329749 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:14.329702 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" podUID="8a931de9-a88f-406c-b48d-f196a475fd94" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused" Apr 24 14:34:14.330223 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:14.329703 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" podUID="6479872c-3320-4186-bddd-beb84d015947" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused" Apr 24 14:34:15.258482 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:15.258461 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" Apr 24 14:34:15.332930 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:15.332897 2570 generic.go:358] "Generic (PLEG): container finished" podID="6676863f-a8e7-4f51-8edd-58f6b5481f05" containerID="440bd041ecb37252da79a94f4dc64c39ca7ace0d285ccd4e5d0de545bc84b73d" exitCode=0 Apr 24 14:34:15.333304 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:15.332962 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" Apr 24 14:34:15.333304 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:15.332980 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" event={"ID":"6676863f-a8e7-4f51-8edd-58f6b5481f05","Type":"ContainerDied","Data":"440bd041ecb37252da79a94f4dc64c39ca7ace0d285ccd4e5d0de545bc84b73d"} Apr 24 14:34:15.333304 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:15.333014 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m" event={"ID":"6676863f-a8e7-4f51-8edd-58f6b5481f05","Type":"ContainerDied","Data":"861fca44dedaf4847dffe6f802620546efbab971e84fca368dce1ae048860e5a"} Apr 24 14:34:15.333304 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:15.333029 2570 scope.go:117] "RemoveContainer" containerID="440bd041ecb37252da79a94f4dc64c39ca7ace0d285ccd4e5d0de545bc84b73d" Apr 24 14:34:15.340845 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:15.340829 2570 scope.go:117] "RemoveContainer" containerID="440bd041ecb37252da79a94f4dc64c39ca7ace0d285ccd4e5d0de545bc84b73d" Apr 24 14:34:15.341140 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:34:15.341095 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"440bd041ecb37252da79a94f4dc64c39ca7ace0d285ccd4e5d0de545bc84b73d\": container with ID starting with 440bd041ecb37252da79a94f4dc64c39ca7ace0d285ccd4e5d0de545bc84b73d not found: ID does not exist" containerID="440bd041ecb37252da79a94f4dc64c39ca7ace0d285ccd4e5d0de545bc84b73d" Apr 24 14:34:15.341211 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:15.341152 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"440bd041ecb37252da79a94f4dc64c39ca7ace0d285ccd4e5d0de545bc84b73d"} err="failed to get container status \"440bd041ecb37252da79a94f4dc64c39ca7ace0d285ccd4e5d0de545bc84b73d\": rpc error: code = NotFound desc = could not find container \"440bd041ecb37252da79a94f4dc64c39ca7ace0d285ccd4e5d0de545bc84b73d\": container with ID starting with 440bd041ecb37252da79a94f4dc64c39ca7ace0d285ccd4e5d0de545bc84b73d not found: ID does not exist" Apr 24 14:34:15.355661 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:15.355614 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m"] Apr 24 14:34:15.359400 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:15.359383 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-f7779-predictor-647c4cfbf7-2hn4m"] Apr 24 14:34:15.597037 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:15.597012 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" Apr 24 14:34:16.161998 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:16.161956 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" podUID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 24 14:34:16.337932 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:16.337898 2570 generic.go:358] "Generic (PLEG): container finished" podID="de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3" containerID="fb40aaaf32c6e139f01513d5ab7df1ab023640d1a868b09073ef369cc7c9f100" exitCode=0 Apr 24 14:34:16.338341 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:16.337938 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" event={"ID":"de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3","Type":"ContainerDied","Data":"fb40aaaf32c6e139f01513d5ab7df1ab023640d1a868b09073ef369cc7c9f100"} Apr 24 14:34:16.338341 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:16.337958 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" event={"ID":"de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3","Type":"ContainerDied","Data":"962440deaaf5007a631257449e01bd8df2fdf330700328b3ee12ae0e275ab3a1"} Apr 24 14:34:16.338341 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:16.337960 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz" Apr 24 14:34:16.338341 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:16.337982 2570 scope.go:117] "RemoveContainer" containerID="fb40aaaf32c6e139f01513d5ab7df1ab023640d1a868b09073ef369cc7c9f100" Apr 24 14:34:16.346301 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:16.346285 2570 scope.go:117] "RemoveContainer" containerID="fb40aaaf32c6e139f01513d5ab7df1ab023640d1a868b09073ef369cc7c9f100" Apr 24 14:34:16.346529 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:34:16.346510 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb40aaaf32c6e139f01513d5ab7df1ab023640d1a868b09073ef369cc7c9f100\": container with ID starting with fb40aaaf32c6e139f01513d5ab7df1ab023640d1a868b09073ef369cc7c9f100 not found: ID does not exist" containerID="fb40aaaf32c6e139f01513d5ab7df1ab023640d1a868b09073ef369cc7c9f100" Apr 24 14:34:16.346562 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:16.346537 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb40aaaf32c6e139f01513d5ab7df1ab023640d1a868b09073ef369cc7c9f100"} err="failed to get container status \"fb40aaaf32c6e139f01513d5ab7df1ab023640d1a868b09073ef369cc7c9f100\": rpc error: code = NotFound desc = could not find container \"fb40aaaf32c6e139f01513d5ab7df1ab023640d1a868b09073ef369cc7c9f100\": container with ID starting with fb40aaaf32c6e139f01513d5ab7df1ab023640d1a868b09073ef369cc7c9f100 not found: ID does not exist" Apr 24 14:34:16.358795 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:16.358769 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz"] Apr 24 14:34:16.361667 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:16.361647 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" 
pods=["kserve-ci-e2e-test/success-200-isvc-f7779-predictor-7699bb9fb4-rxwrz"] Apr 24 14:34:16.428648 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:16.428616 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6676863f-a8e7-4f51-8edd-58f6b5481f05" path="/var/lib/kubelet/pods/6676863f-a8e7-4f51-8edd-58f6b5481f05/volumes" Apr 24 14:34:16.428861 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:16.428848 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3" path="/var/lib/kubelet/pods/de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3/volumes" Apr 24 14:34:24.330016 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:24.329975 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" podUID="6479872c-3320-4186-bddd-beb84d015947" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused" Apr 24 14:34:24.330538 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:24.329975 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" podUID="8a931de9-a88f-406c-b48d-f196a475fd94" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused" Apr 24 14:34:26.162157 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:26.162118 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" Apr 24 14:34:34.329961 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:34.329915 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" podUID="8a931de9-a88f-406c-b48d-f196a475fd94" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused" Apr 24 14:34:34.330440 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:34.329916 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" podUID="6479872c-3320-4186-bddd-beb84d015947" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused" Apr 24 14:34:44.330414 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:44.330366 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" podUID="8a931de9-a88f-406c-b48d-f196a475fd94" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused" Apr 24 14:34:44.330869 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:44.330370 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" podUID="6479872c-3320-4186-bddd-beb84d015947" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused" Apr 24 14:34:52.127638 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.127598 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7"] Apr 24 14:34:52.128206 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.127975 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" 
podUID="c9618735-5632-4e5e-8623-0a3cfc9508f8" containerName="kserve-container" containerID="cri-o://09c00942d4d576d11362320faac5ea320f8f72fafdbaf9402769ef16949b3cdf" gracePeriod=30 Apr 24 14:34:52.158304 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.158272 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq"] Apr 24 14:34:52.158617 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.158601 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6676863f-a8e7-4f51-8edd-58f6b5481f05" containerName="kserve-container" Apr 24 14:34:52.158698 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.158620 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="6676863f-a8e7-4f51-8edd-58f6b5481f05" containerName="kserve-container" Apr 24 14:34:52.158698 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.158657 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3" containerName="kserve-container" Apr 24 14:34:52.158698 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.158666 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3" containerName="kserve-container" Apr 24 14:34:52.158843 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.158735 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="de1d63c0-f1a2-4f24-8a52-7036a3e9e6b3" containerName="kserve-container" Apr 24 14:34:52.158843 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.158748 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="6676863f-a8e7-4f51-8edd-58f6b5481f05" containerName="kserve-container" Apr 24 14:34:52.163401 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.163373 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" Apr 24 14:34:52.171531 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.171476 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq"] Apr 24 14:34:52.175504 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.175484 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" Apr 24 14:34:52.241577 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.241543 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws"] Apr 24 14:34:52.246334 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.245750 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" Apr 24 14:34:52.255521 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.255008 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws"] Apr 24 14:34:52.259450 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.259242 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj"] Apr 24 14:34:52.260091 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.260044 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" podUID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" containerName="kserve-container" containerID="cri-o://7a608afcaa710e5cb4f75423b94bfaa976c349fcf01bf8106da0591f84e988fc" gracePeriod=30 Apr 24 14:34:52.264657 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.264309 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" Apr 24 14:34:52.320226 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.320174 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq"] Apr 24 14:34:52.323539 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:34:52.323502 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f60c23a_86e2_4674_a671_a867be96e616.slice/crio-b70ad654f97d1ca0c4ce56bf4ae30fcb840c7f67abf281a387da1d4eea61a37b WatchSource:0}: Error finding container b70ad654f97d1ca0c4ce56bf4ae30fcb840c7f67abf281a387da1d4eea61a37b: Status 404 returned error can't find the container with id b70ad654f97d1ca0c4ce56bf4ae30fcb840c7f67abf281a387da1d4eea61a37b Apr 24 14:34:52.326306 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.326019 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 24 14:34:52.405166 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.405142 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws"] Apr 24 14:34:52.407902 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:34:52.407877 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fff380d_f461_478a_9ead_b38373d06339.slice/crio-c4768984c38f6b05f8d661d8d600a2f6434ac247e6122a743df93167c3e77ce1 WatchSource:0}: Error finding container c4768984c38f6b05f8d661d8d600a2f6434ac247e6122a743df93167c3e77ce1: Status 404 returned error can't find the container with id c4768984c38f6b05f8d661d8d600a2f6434ac247e6122a743df93167c3e77ce1 Apr 24 14:34:52.448586 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.448558 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" event={"ID":"6fff380d-f461-478a-9ead-b38373d06339","Type":"ContainerStarted","Data":"c4768984c38f6b05f8d661d8d600a2f6434ac247e6122a743df93167c3e77ce1"} Apr 24 14:34:52.449814 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.449788 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" 
event={"ID":"2f60c23a-86e2-4674-a671-a867be96e616","Type":"ContainerStarted","Data":"2694928c9a85dee4066ffc11e600b6ae098507fc353a5499a9beae080bd19c8f"} Apr 24 14:34:52.449919 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.449817 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" event={"ID":"2f60c23a-86e2-4674-a671-a867be96e616","Type":"ContainerStarted","Data":"b70ad654f97d1ca0c4ce56bf4ae30fcb840c7f67abf281a387da1d4eea61a37b"} Apr 24 14:34:52.450030 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.450005 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" Apr 24 14:34:52.451018 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.450997 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" podUID="2f60c23a-86e2-4674-a671-a867be96e616" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 24 14:34:52.465332 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:52.465292 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" podStartSLOduration=0.465260541 podStartE2EDuration="465.260541ms" podCreationTimestamp="2026-04-24 14:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:34:52.463971608 +0000 UTC m=+670.589375423" watchObservedRunningTime="2026-04-24 14:34:52.465260541 +0000 UTC m=+670.590664356" Apr 24 14:34:53.453998 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:53.453959 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" event={"ID":"6fff380d-f461-478a-9ead-b38373d06339","Type":"ContainerStarted","Data":"46ee565da0adfba7e3177071df26af41bce8f84bb2210c9ed747df7aeddc60d3"} Apr 24 14:34:53.454445 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:53.454211 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" Apr 24 14:34:53.454445 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:53.454394 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" podUID="2f60c23a-86e2-4674-a671-a867be96e616" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 24 14:34:53.455371 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:53.455347 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" podUID="6fff380d-f461-478a-9ead-b38373d06339" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 24 14:34:53.469583 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:53.469542 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" podStartSLOduration=1.469507159 podStartE2EDuration="1.469507159s" podCreationTimestamp="2026-04-24 14:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-04-24 14:34:53.46790871 +0000 UTC m=+671.593312525" watchObservedRunningTime="2026-04-24 14:34:53.469507159 +0000 UTC m=+671.594910973" Apr 24 14:34:54.330158 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:54.330123 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" podUID="8a931de9-a88f-406c-b48d-f196a475fd94" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.22:8080: connect: connection refused" Apr 24 14:34:54.330346 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:54.330097 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" podUID="6479872c-3320-4186-bddd-beb84d015947" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.23:8080: connect: connection refused" Apr 24 14:34:54.457687 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:54.457646 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" podUID="6fff380d-f461-478a-9ead-b38373d06339" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 24 14:34:56.162239 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:56.162192 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" podUID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.20:8080: connect: connection refused" Apr 24 14:34:56.816042 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:56.816018 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" Apr 24 14:34:56.881726 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:56.881640 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f65fd936-4a65-48ec-b909-5d3b58fc41f5-kserve-provision-location\") pod \"f65fd936-4a65-48ec-b909-5d3b58fc41f5\" (UID: \"f65fd936-4a65-48ec-b909-5d3b58fc41f5\") " Apr 24 14:34:56.881993 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:56.881968 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65fd936-4a65-48ec-b909-5d3b58fc41f5-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "f65fd936-4a65-48ec-b909-5d3b58fc41f5" (UID: "f65fd936-4a65-48ec-b909-5d3b58fc41f5"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 24 14:34:56.982784 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:56.982741 2570 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f65fd936-4a65-48ec-b909-5d3b58fc41f5-kserve-provision-location\") on node \"ip-10-0-129-34.ec2.internal\" DevicePath \"\"" Apr 24 14:34:57.466627 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:57.466594 2570 generic.go:358] "Generic (PLEG): container finished" podID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" containerID="7a608afcaa710e5cb4f75423b94bfaa976c349fcf01bf8106da0591f84e988fc" exitCode=0 Apr 24 14:34:57.467058 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:57.466660 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" Apr 24 14:34:57.467058 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:57.466675 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" event={"ID":"f65fd936-4a65-48ec-b909-5d3b58fc41f5","Type":"ContainerDied","Data":"7a608afcaa710e5cb4f75423b94bfaa976c349fcf01bf8106da0591f84e988fc"} Apr 24 14:34:57.467058 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:57.466713 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj" event={"ID":"f65fd936-4a65-48ec-b909-5d3b58fc41f5","Type":"ContainerDied","Data":"e94974ae010a9ab7f0b340596eedf68f404268ff5c625d829d6cf650e86c510a"} Apr 24 14:34:57.467058 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:57.466730 2570 scope.go:117] "RemoveContainer" containerID="7a608afcaa710e5cb4f75423b94bfaa976c349fcf01bf8106da0591f84e988fc" Apr 24 14:34:57.474579 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:57.474564 2570 scope.go:117] "RemoveContainer" containerID="774cd204b7dd845e87c57816a23a4fa4e9d657b532cdcfae73acfefc8bc32c87" Apr 24 14:34:57.481772 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:57.481754 2570 scope.go:117] "RemoveContainer" containerID="7a608afcaa710e5cb4f75423b94bfaa976c349fcf01bf8106da0591f84e988fc" Apr 24 14:34:57.482072 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:34:57.482052 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a608afcaa710e5cb4f75423b94bfaa976c349fcf01bf8106da0591f84e988fc\": container with ID starting with 7a608afcaa710e5cb4f75423b94bfaa976c349fcf01bf8106da0591f84e988fc not found: ID does not exist" containerID="7a608afcaa710e5cb4f75423b94bfaa976c349fcf01bf8106da0591f84e988fc" Apr 24 14:34:57.482137 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:57.482082 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a608afcaa710e5cb4f75423b94bfaa976c349fcf01bf8106da0591f84e988fc"} err="failed to get container status \"7a608afcaa710e5cb4f75423b94bfaa976c349fcf01bf8106da0591f84e988fc\": rpc error: code = NotFound desc = could not find container \"7a608afcaa710e5cb4f75423b94bfaa976c349fcf01bf8106da0591f84e988fc\": container with ID starting with 7a608afcaa710e5cb4f75423b94bfaa976c349fcf01bf8106da0591f84e988fc not found: ID does not exist" Apr 24 14:34:57.482252 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:57.482237 2570 scope.go:117] "RemoveContainer" containerID="774cd204b7dd845e87c57816a23a4fa4e9d657b532cdcfae73acfefc8bc32c87" Apr 24 14:34:57.482652 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:34:57.482573 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"774cd204b7dd845e87c57816a23a4fa4e9d657b532cdcfae73acfefc8bc32c87\": container with ID starting with 774cd204b7dd845e87c57816a23a4fa4e9d657b532cdcfae73acfefc8bc32c87 not found: ID does not exist" containerID="774cd204b7dd845e87c57816a23a4fa4e9d657b532cdcfae73acfefc8bc32c87" Apr 24 14:34:57.482652 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:57.482604 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"774cd204b7dd845e87c57816a23a4fa4e9d657b532cdcfae73acfefc8bc32c87"} err="failed to get container status \"774cd204b7dd845e87c57816a23a4fa4e9d657b532cdcfae73acfefc8bc32c87\": rpc error: code = NotFound 
desc = could not find container \"774cd204b7dd845e87c57816a23a4fa4e9d657b532cdcfae73acfefc8bc32c87\": container with ID starting with 774cd204b7dd845e87c57816a23a4fa4e9d657b532cdcfae73acfefc8bc32c87 not found: ID does not exist" Apr 24 14:34:57.488961 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:57.488910 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj"] Apr 24 14:34:57.490879 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:57.490544 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-784cb989b8-j9rbj"] Apr 24 14:34:57.876319 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:57.876295 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" Apr 24 14:34:57.993801 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:57.993759 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/c9618735-5632-4e5e-8623-0a3cfc9508f8-kserve-provision-location\") pod \"c9618735-5632-4e5e-8623-0a3cfc9508f8\" (UID: \"c9618735-5632-4e5e-8623-0a3cfc9508f8\") " Apr 24 14:34:57.994114 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:57.994074 2570 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9618735-5632-4e5e-8623-0a3cfc9508f8-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "c9618735-5632-4e5e-8623-0a3cfc9508f8" (UID: "c9618735-5632-4e5e-8623-0a3cfc9508f8"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 24 14:34:58.095219 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:58.095140 2570 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/c9618735-5632-4e5e-8623-0a3cfc9508f8-kserve-provision-location\") on node \"ip-10-0-129-34.ec2.internal\" DevicePath \"\"" Apr 24 14:34:58.428652 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:58.428617 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" path="/var/lib/kubelet/pods/f65fd936-4a65-48ec-b909-5d3b58fc41f5/volumes" Apr 24 14:34:58.473468 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:58.473429 2570 generic.go:358] "Generic (PLEG): container finished" podID="c9618735-5632-4e5e-8623-0a3cfc9508f8" containerID="09c00942d4d576d11362320faac5ea320f8f72fafdbaf9402769ef16949b3cdf" exitCode=0 Apr 24 14:34:58.473924 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:58.473477 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" event={"ID":"c9618735-5632-4e5e-8623-0a3cfc9508f8","Type":"ContainerDied","Data":"09c00942d4d576d11362320faac5ea320f8f72fafdbaf9402769ef16949b3cdf"} Apr 24 14:34:58.473924 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:58.473502 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" Apr 24 14:34:58.473924 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:58.473522 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7" event={"ID":"c9618735-5632-4e5e-8623-0a3cfc9508f8","Type":"ContainerDied","Data":"845b8e5603e6afa079c618829e89b09b68181b99db397f0eedb9bfad387a045c"} Apr 24 14:34:58.473924 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:58.473543 2570 scope.go:117] "RemoveContainer" containerID="09c00942d4d576d11362320faac5ea320f8f72fafdbaf9402769ef16949b3cdf" Apr 24 14:34:58.481436 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:58.481363 2570 scope.go:117] "RemoveContainer" containerID="249d1122a833f9ab544d6ca82d6ba6eeba2928cafe0d6150f5ab3f09eadc661a" Apr 24 14:34:58.488865 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:58.488846 2570 scope.go:117] "RemoveContainer" containerID="09c00942d4d576d11362320faac5ea320f8f72fafdbaf9402769ef16949b3cdf" Apr 24 14:34:58.490077 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:34:58.490047 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09c00942d4d576d11362320faac5ea320f8f72fafdbaf9402769ef16949b3cdf\": container with ID starting with 09c00942d4d576d11362320faac5ea320f8f72fafdbaf9402769ef16949b3cdf not found: ID does not exist" containerID="09c00942d4d576d11362320faac5ea320f8f72fafdbaf9402769ef16949b3cdf" Apr 24 14:34:58.490218 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:58.490088 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c00942d4d576d11362320faac5ea320f8f72fafdbaf9402769ef16949b3cdf"} err="failed to get container status \"09c00942d4d576d11362320faac5ea320f8f72fafdbaf9402769ef16949b3cdf\": rpc error: code = NotFound desc = could not find container \"09c00942d4d576d11362320faac5ea320f8f72fafdbaf9402769ef16949b3cdf\": container with ID starting with 09c00942d4d576d11362320faac5ea320f8f72fafdbaf9402769ef16949b3cdf not found: ID does not exist" Apr 24 14:34:58.490218 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:58.490199 2570 scope.go:117] "RemoveContainer" containerID="249d1122a833f9ab544d6ca82d6ba6eeba2928cafe0d6150f5ab3f09eadc661a" Apr 24 14:34:58.490519 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:34:58.490495 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"249d1122a833f9ab544d6ca82d6ba6eeba2928cafe0d6150f5ab3f09eadc661a\": container with ID starting with 249d1122a833f9ab544d6ca82d6ba6eeba2928cafe0d6150f5ab3f09eadc661a not found: ID does not exist" containerID="249d1122a833f9ab544d6ca82d6ba6eeba2928cafe0d6150f5ab3f09eadc661a" Apr 24 14:34:58.490605 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:58.490527 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"249d1122a833f9ab544d6ca82d6ba6eeba2928cafe0d6150f5ab3f09eadc661a"} err="failed to get container status \"249d1122a833f9ab544d6ca82d6ba6eeba2928cafe0d6150f5ab3f09eadc661a\": rpc error: code = NotFound desc = could not find container \"249d1122a833f9ab544d6ca82d6ba6eeba2928cafe0d6150f5ab3f09eadc661a\": container with ID starting with 249d1122a833f9ab544d6ca82d6ba6eeba2928cafe0d6150f5ab3f09eadc661a not found: ID does not exist" Apr 24 14:34:58.492228 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:58.492204 2570 kubelet.go:2553] "SyncLoop DELETE" 
source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7"] Apr 24 14:34:58.496662 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:34:58.496638 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-54dccdcc4-n2tq7"] Apr 24 14:35:00.428300 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:35:00.428262 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9618735-5632-4e5e-8623-0a3cfc9508f8" path="/var/lib/kubelet/pods/c9618735-5632-4e5e-8623-0a3cfc9508f8/volumes" Apr 24 14:35:03.454627 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:35:03.454587 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" podUID="2f60c23a-86e2-4674-a671-a867be96e616" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 24 14:35:04.330741 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:35:04.330710 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" Apr 24 14:35:04.331164 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:35:04.331143 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" Apr 24 14:35:04.457763 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:35:04.457719 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" podUID="6fff380d-f461-478a-9ead-b38373d06339" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 24 14:35:13.454428 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:35:13.454381 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" podUID="2f60c23a-86e2-4674-a671-a867be96e616" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 24 14:35:14.457881 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:35:14.457829 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" podUID="6fff380d-f461-478a-9ead-b38373d06339" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 24 14:35:23.454716 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:35:23.454667 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" podUID="2f60c23a-86e2-4674-a671-a867be96e616" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 24 14:35:24.457739 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:35:24.457693 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" podUID="6fff380d-f461-478a-9ead-b38373d06339" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 24 14:35:33.455256 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:35:33.455163 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" podUID="2f60c23a-86e2-4674-a671-a867be96e616" 
containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused" Apr 24 14:35:34.458118 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:35:34.458057 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" podUID="6fff380d-f461-478a-9ead-b38373d06339" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused" Apr 24 14:35:43.455655 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:35:43.455619 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" Apr 24 14:35:44.459285 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:35:44.459251 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" Apr 24 14:38:42.380340 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:38:42.380264 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 14:38:42.382071 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:38:42.382049 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 14:43:36.985957 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:36.985922 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb"] Apr 24 14:43:36.988421 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:36.986177 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" podUID="8a931de9-a88f-406c-b48d-f196a475fd94" containerName="kserve-container" containerID="cri-o://d4555378d73b800311049ff81cd3c7d6817cd20781bec554850e3db05d76464e" gracePeriod=30 Apr 24 14:43:37.022759 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.022728 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh"] Apr 24 14:43:37.023033 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.023021 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9618735-5632-4e5e-8623-0a3cfc9508f8" containerName="storage-initializer" Apr 24 14:43:37.023078 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.023034 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9618735-5632-4e5e-8623-0a3cfc9508f8" containerName="storage-initializer" Apr 24 14:43:37.023078 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.023042 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" containerName="kserve-container" Apr 24 14:43:37.023078 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.023048 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" containerName="kserve-container" Apr 24 14:43:37.023078 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.023058 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" containerName="storage-initializer" Apr 24 14:43:37.023078 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.023063 2570 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" containerName="storage-initializer" Apr 24 14:43:37.023078 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.023072 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9618735-5632-4e5e-8623-0a3cfc9508f8" containerName="kserve-container" Apr 24 14:43:37.023078 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.023077 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9618735-5632-4e5e-8623-0a3cfc9508f8" containerName="kserve-container" Apr 24 14:43:37.023310 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.023154 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="f65fd936-4a65-48ec-b909-5d3b58fc41f5" containerName="kserve-container" Apr 24 14:43:37.023310 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.023164 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="c9618735-5632-4e5e-8623-0a3cfc9508f8" containerName="kserve-container" Apr 24 14:43:37.026097 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.026077 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" Apr 24 14:43:37.035126 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.035089 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" Apr 24 14:43:37.038293 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.038262 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh"] Apr 24 14:43:37.069313 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.069276 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz"] Apr 24 14:43:37.069623 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.069583 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" podUID="6479872c-3320-4186-bddd-beb84d015947" containerName="kserve-container" containerID="cri-o://6d9b061eaa7a3ec7f7fd916e8fd26491ece79057bf239588a980e5392a66d114" gracePeriod=30 Apr 24 14:43:37.098259 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.098226 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79"] Apr 24 14:43:37.105994 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.105481 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" Apr 24 14:43:37.108532 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.108479 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79"] Apr 24 14:43:37.118251 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.118234 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" Apr 24 14:43:37.173512 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.173232 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh"] Apr 24 14:43:37.179693 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:43:37.179638 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda24f9743_ed88_40c5_a792_732789d9aedd.slice/crio-5bf56d95589d53b8b6360f506368a806cb90b990986937d1857c7c7297f627cd WatchSource:0}: Error finding container 5bf56d95589d53b8b6360f506368a806cb90b990986937d1857c7c7297f627cd: Status 404 returned error can't find the container with id 5bf56d95589d53b8b6360f506368a806cb90b990986937d1857c7c7297f627cd Apr 24 14:43:37.182084 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.181845 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 24 14:43:37.248383 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.248355 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79"] Apr 24 14:43:37.251116 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:43:37.251074 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod223202e5_c32d_4552_a13e_219de7f44444.slice/crio-2dcba1cdf992a5192f801d509f51719c4ff500b3ad11061ee07fd890e711d70e WatchSource:0}: Error finding container 2dcba1cdf992a5192f801d509f51719c4ff500b3ad11061ee07fd890e711d70e: Status 404 returned error can't find the container with id 2dcba1cdf992a5192f801d509f51719c4ff500b3ad11061ee07fd890e711d70e Apr 24 14:43:37.933297 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.933261 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" event={"ID":"a24f9743-ed88-40c5-a792-732789d9aedd","Type":"ContainerStarted","Data":"4f6e5a1c5969a7c348cab7daefef2fdb85303821d0f97fbe5bd5e0f05c76ba65"} Apr 24 14:43:37.933297 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.933302 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" event={"ID":"a24f9743-ed88-40c5-a792-732789d9aedd","Type":"ContainerStarted","Data":"5bf56d95589d53b8b6360f506368a806cb90b990986937d1857c7c7297f627cd"} Apr 24 14:43:37.933565 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.933430 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" Apr 24 14:43:37.934597 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.934566 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" event={"ID":"223202e5-c32d-4552-a13e-219de7f44444","Type":"ContainerStarted","Data":"634e1d6d42da3568b7c0e52ff6393ab394507fba1930249daf5192964960af9e"} Apr 24 14:43:37.934597 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.934595 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" event={"ID":"223202e5-c32d-4552-a13e-219de7f44444","Type":"ContainerStarted","Data":"2dcba1cdf992a5192f801d509f51719c4ff500b3ad11061ee07fd890e711d70e"} Apr 24 14:43:37.934754 ip-10-0-129-34 
kubenswrapper[2570]: I0424 14:43:37.934686 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" podUID="a24f9743-ed88-40c5-a792-732789d9aedd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 24 14:43:37.934790 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.934773 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" Apr 24 14:43:37.935647 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.935624 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" podUID="223202e5-c32d-4552-a13e-219de7f44444" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" Apr 24 14:43:37.947766 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.947723 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" podStartSLOduration=0.947711684 podStartE2EDuration="947.711684ms" podCreationTimestamp="2026-04-24 14:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:43:37.946961911 +0000 UTC m=+1196.072365726" watchObservedRunningTime="2026-04-24 14:43:37.947711684 +0000 UTC m=+1196.073115576" Apr 24 14:43:37.962483 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:37.962444 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" podStartSLOduration=0.962429596 podStartE2EDuration="962.429596ms" podCreationTimestamp="2026-04-24 14:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:43:37.961592008 +0000 UTC m=+1196.086995824" watchObservedRunningTime="2026-04-24 14:43:37.962429596 +0000 UTC m=+1196.087833410" Apr 24 14:43:38.937443 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:38.937407 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" podUID="223202e5-c32d-4552-a13e-219de7f44444" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" Apr 24 14:43:38.937831 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:38.937408 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" podUID="a24f9743-ed88-40c5-a792-732789d9aedd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 24 14:43:40.228125 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.228089 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" Apr 24 14:43:40.899791 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.899770 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" Apr 24 14:43:40.945032 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.944955 2570 generic.go:358] "Generic (PLEG): container finished" podID="6479872c-3320-4186-bddd-beb84d015947" containerID="6d9b061eaa7a3ec7f7fd916e8fd26491ece79057bf239588a980e5392a66d114" exitCode=0 Apr 24 14:43:40.945032 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.945012 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" Apr 24 14:43:40.945257 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.945031 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" event={"ID":"6479872c-3320-4186-bddd-beb84d015947","Type":"ContainerDied","Data":"6d9b061eaa7a3ec7f7fd916e8fd26491ece79057bf239588a980e5392a66d114"} Apr 24 14:43:40.945257 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.945063 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz" event={"ID":"6479872c-3320-4186-bddd-beb84d015947","Type":"ContainerDied","Data":"4d5aa9263f61e42a2d4dc7822d5b6dc2c5d1fc5c0aa696f71fb7b7248a5a5eaa"} Apr 24 14:43:40.945257 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.945080 2570 scope.go:117] "RemoveContainer" containerID="6d9b061eaa7a3ec7f7fd916e8fd26491ece79057bf239588a980e5392a66d114" Apr 24 14:43:40.946182 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.946160 2570 generic.go:358] "Generic (PLEG): container finished" podID="8a931de9-a88f-406c-b48d-f196a475fd94" containerID="d4555378d73b800311049ff81cd3c7d6817cd20781bec554850e3db05d76464e" exitCode=0 Apr 24 14:43:40.946279 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.946220 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" event={"ID":"8a931de9-a88f-406c-b48d-f196a475fd94","Type":"ContainerDied","Data":"d4555378d73b800311049ff81cd3c7d6817cd20781bec554850e3db05d76464e"} Apr 24 14:43:40.946279 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.946244 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" event={"ID":"8a931de9-a88f-406c-b48d-f196a475fd94","Type":"ContainerDied","Data":"fec9dcd1e4a992783b2320977dff4c82bce1f334b5d6f82bd529f5d0b75a6234"} Apr 24 14:43:40.946279 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.946247 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb" Apr 24 14:43:40.952939 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.952923 2570 scope.go:117] "RemoveContainer" containerID="6d9b061eaa7a3ec7f7fd916e8fd26491ece79057bf239588a980e5392a66d114" Apr 24 14:43:40.953186 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:43:40.953169 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d9b061eaa7a3ec7f7fd916e8fd26491ece79057bf239588a980e5392a66d114\": container with ID starting with 6d9b061eaa7a3ec7f7fd916e8fd26491ece79057bf239588a980e5392a66d114 not found: ID does not exist" containerID="6d9b061eaa7a3ec7f7fd916e8fd26491ece79057bf239588a980e5392a66d114" Apr 24 14:43:40.953251 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.953193 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d9b061eaa7a3ec7f7fd916e8fd26491ece79057bf239588a980e5392a66d114"} err="failed to get container status \"6d9b061eaa7a3ec7f7fd916e8fd26491ece79057bf239588a980e5392a66d114\": rpc error: code = NotFound desc = could not find container \"6d9b061eaa7a3ec7f7fd916e8fd26491ece79057bf239588a980e5392a66d114\": container with ID starting with 6d9b061eaa7a3ec7f7fd916e8fd26491ece79057bf239588a980e5392a66d114 not found: ID does not exist" Apr 24 14:43:40.953251 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.953209 2570 scope.go:117] "RemoveContainer" containerID="d4555378d73b800311049ff81cd3c7d6817cd20781bec554850e3db05d76464e" Apr 24 14:43:40.959731 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.959702 2570 scope.go:117] "RemoveContainer" containerID="d4555378d73b800311049ff81cd3c7d6817cd20781bec554850e3db05d76464e" Apr 24 14:43:40.959991 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:43:40.959969 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4555378d73b800311049ff81cd3c7d6817cd20781bec554850e3db05d76464e\": container with ID starting with d4555378d73b800311049ff81cd3c7d6817cd20781bec554850e3db05d76464e not found: ID does not exist" containerID="d4555378d73b800311049ff81cd3c7d6817cd20781bec554850e3db05d76464e" Apr 24 14:43:40.960077 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.960001 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4555378d73b800311049ff81cd3c7d6817cd20781bec554850e3db05d76464e"} err="failed to get container status \"d4555378d73b800311049ff81cd3c7d6817cd20781bec554850e3db05d76464e\": rpc error: code = NotFound desc = could not find container \"d4555378d73b800311049ff81cd3c7d6817cd20781bec554850e3db05d76464e\": container with ID starting with d4555378d73b800311049ff81cd3c7d6817cd20781bec554850e3db05d76464e not found: ID does not exist" Apr 24 14:43:40.960749 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.960731 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb"] Apr 24 14:43:40.964334 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.964316 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-96b39-predictor-c56f676c4-v9szb"] Apr 24 14:43:40.974396 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:40.974377 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz"] Apr 24 14:43:40.976679 ip-10-0-129-34 
kubenswrapper[2570]: I0424 14:43:40.976661 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-96b39-predictor-f7d9bd579-jz7pz"] Apr 24 14:43:42.400663 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:42.400631 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 14:43:42.402929 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:42.402910 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 14:43:42.428892 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:42.428860 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6479872c-3320-4186-bddd-beb84d015947" path="/var/lib/kubelet/pods/6479872c-3320-4186-bddd-beb84d015947/volumes" Apr 24 14:43:42.429389 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:42.429369 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a931de9-a88f-406c-b48d-f196a475fd94" path="/var/lib/kubelet/pods/8a931de9-a88f-406c-b48d-f196a475fd94/volumes" Apr 24 14:43:48.938308 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:48.938269 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" podUID="a24f9743-ed88-40c5-a792-732789d9aedd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 24 14:43:48.938690 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:48.938276 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" podUID="223202e5-c32d-4552-a13e-219de7f44444" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" Apr 24 14:43:58.937768 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:58.937725 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" podUID="a24f9743-ed88-40c5-a792-732789d9aedd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 24 14:43:58.938164 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:43:58.937730 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" podUID="223202e5-c32d-4552-a13e-219de7f44444" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" Apr 24 14:44:08.937984 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:08.937938 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" podUID="223202e5-c32d-4552-a13e-219de7f44444" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" Apr 24 14:44:08.937984 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:08.937953 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" podUID="a24f9743-ed88-40c5-a792-732789d9aedd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 24 14:44:16.993608 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:16.993574 2570 
kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq"] Apr 24 14:44:16.993990 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:16.993820 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" podUID="2f60c23a-86e2-4674-a671-a867be96e616" containerName="kserve-container" containerID="cri-o://2694928c9a85dee4066ffc11e600b6ae098507fc353a5499a9beae080bd19c8f" gracePeriod=30 Apr 24 14:44:17.005561 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.005528 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz"] Apr 24 14:44:17.005860 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.005844 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6479872c-3320-4186-bddd-beb84d015947" containerName="kserve-container" Apr 24 14:44:17.005953 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.005863 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="6479872c-3320-4186-bddd-beb84d015947" containerName="kserve-container" Apr 24 14:44:17.005953 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.005876 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8a931de9-a88f-406c-b48d-f196a475fd94" containerName="kserve-container" Apr 24 14:44:17.005953 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.005937 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a931de9-a88f-406c-b48d-f196a475fd94" containerName="kserve-container" Apr 24 14:44:17.006095 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.006025 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="8a931de9-a88f-406c-b48d-f196a475fd94" containerName="kserve-container" Apr 24 14:44:17.006095 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.006039 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="6479872c-3320-4186-bddd-beb84d015947" containerName="kserve-container" Apr 24 14:44:17.008962 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.008943 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" Apr 24 14:44:17.016323 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.016303 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz"] Apr 24 14:44:17.018634 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.018619 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" Apr 24 14:44:17.055165 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.055132 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws"] Apr 24 14:44:17.055467 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.055438 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" podUID="6fff380d-f461-478a-9ead-b38373d06339" containerName="kserve-container" containerID="cri-o://46ee565da0adfba7e3177071df26af41bce8f84bb2210c9ed747df7aeddc60d3" gracePeriod=30 Apr 24 14:44:17.088316 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.087413 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt"] Apr 24 14:44:17.092414 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.092387 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" Apr 24 14:44:17.095857 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.095819 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt"] Apr 24 14:44:17.104384 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.104365 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" Apr 24 14:44:17.151319 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.151284 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz"] Apr 24 14:44:17.154246 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:44:17.154213 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0f71684_5c0d_4bff_80a8_787d65b1318c.slice/crio-fc1352b762935e75db7a9c6b0579cf30b75572f1da45700ad3ae19f871d9918b WatchSource:0}: Error finding container fc1352b762935e75db7a9c6b0579cf30b75572f1da45700ad3ae19f871d9918b: Status 404 returned error can't find the container with id fc1352b762935e75db7a9c6b0579cf30b75572f1da45700ad3ae19f871d9918b Apr 24 14:44:17.232517 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:17.232444 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt"] Apr 24 14:44:17.234821 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:44:17.234794 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bad82f6_4e93_448e_ad5f_e78cd014418e.slice/crio-0a9f2bd9ef2aa36f94fc0a5d62580ccf1bc36d781bfb1e174cf1f66dd9f40c00 WatchSource:0}: Error finding container 0a9f2bd9ef2aa36f94fc0a5d62580ccf1bc36d781bfb1e174cf1f66dd9f40c00: Status 404 returned error can't find the container with id 0a9f2bd9ef2aa36f94fc0a5d62580ccf1bc36d781bfb1e174cf1f66dd9f40c00 Apr 24 14:44:18.056861 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:18.056820 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" event={"ID":"6bad82f6-4e93-448e-ad5f-e78cd014418e","Type":"ContainerStarted","Data":"51167b21e110b3a9ecaa3b65557f9740807102e177584c51f7ca92ae1d170212"} Apr 24 14:44:18.057312 ip-10-0-129-34 kubenswrapper[2570]: 
I0424 14:44:18.056868 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" event={"ID":"6bad82f6-4e93-448e-ad5f-e78cd014418e","Type":"ContainerStarted","Data":"0a9f2bd9ef2aa36f94fc0a5d62580ccf1bc36d781bfb1e174cf1f66dd9f40c00"} Apr 24 14:44:18.057312 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:18.057035 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" Apr 24 14:44:18.058243 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:18.058201 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" podUID="6bad82f6-4e93-448e-ad5f-e78cd014418e" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.29:8080: connect: connection refused" Apr 24 14:44:18.058388 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:18.058363 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" event={"ID":"d0f71684-5c0d-4bff-80a8-787d65b1318c","Type":"ContainerStarted","Data":"5830a04b666e9666f13a734e4ee27f31766dc507431c88a985eaf20cc8f8cd55"} Apr 24 14:44:18.058388 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:18.058391 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" event={"ID":"d0f71684-5c0d-4bff-80a8-787d65b1318c","Type":"ContainerStarted","Data":"fc1352b762935e75db7a9c6b0579cf30b75572f1da45700ad3ae19f871d9918b"} Apr 24 14:44:18.058597 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:18.058585 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" Apr 24 14:44:18.059457 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:18.059433 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" podUID="d0f71684-5c0d-4bff-80a8-787d65b1318c" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused" Apr 24 14:44:18.071159 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:18.071123 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" podStartSLOduration=1.071111954 podStartE2EDuration="1.071111954s" podCreationTimestamp="2026-04-24 14:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:44:18.069980059 +0000 UTC m=+1236.195383873" watchObservedRunningTime="2026-04-24 14:44:18.071111954 +0000 UTC m=+1236.196515759" Apr 24 14:44:18.085356 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:18.085305 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" podStartSLOduration=2.085289415 podStartE2EDuration="2.085289415s" podCreationTimestamp="2026-04-24 14:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:44:18.082824211 +0000 UTC m=+1236.208228027" watchObservedRunningTime="2026-04-24 14:44:18.085289415 +0000 UTC m=+1236.210693231" Apr 24 14:44:18.937984 ip-10-0-129-34 kubenswrapper[2570]: I0424 
14:44:18.937942 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" podUID="223202e5-c32d-4552-a13e-219de7f44444" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" Apr 24 14:44:18.938204 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:18.937943 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" podUID="a24f9743-ed88-40c5-a792-732789d9aedd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 24 14:44:19.061871 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:19.061830 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" podUID="d0f71684-5c0d-4bff-80a8-787d65b1318c" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused" Apr 24 14:44:19.061871 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:19.061855 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" podUID="6bad82f6-4e93-448e-ad5f-e78cd014418e" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.29:8080: connect: connection refused" Apr 24 14:44:20.324724 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:20.324704 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" Apr 24 14:44:21.023059 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.023032 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" Apr 24 14:44:21.068782 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.068743 2570 generic.go:358] "Generic (PLEG): container finished" podID="2f60c23a-86e2-4674-a671-a867be96e616" containerID="2694928c9a85dee4066ffc11e600b6ae098507fc353a5499a9beae080bd19c8f" exitCode=0 Apr 24 14:44:21.068901 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.068803 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" Apr 24 14:44:21.068901 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.068822 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" event={"ID":"2f60c23a-86e2-4674-a671-a867be96e616","Type":"ContainerDied","Data":"2694928c9a85dee4066ffc11e600b6ae098507fc353a5499a9beae080bd19c8f"} Apr 24 14:44:21.068901 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.068866 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq" event={"ID":"2f60c23a-86e2-4674-a671-a867be96e616","Type":"ContainerDied","Data":"b70ad654f97d1ca0c4ce56bf4ae30fcb840c7f67abf281a387da1d4eea61a37b"} Apr 24 14:44:21.068901 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.068888 2570 scope.go:117] "RemoveContainer" containerID="2694928c9a85dee4066ffc11e600b6ae098507fc353a5499a9beae080bd19c8f" Apr 24 14:44:21.069894 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.069866 2570 generic.go:358] "Generic (PLEG): container finished" podID="6fff380d-f461-478a-9ead-b38373d06339" containerID="46ee565da0adfba7e3177071df26af41bce8f84bb2210c9ed747df7aeddc60d3" exitCode=0 Apr 24 14:44:21.069974 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.069906 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" event={"ID":"6fff380d-f461-478a-9ead-b38373d06339","Type":"ContainerDied","Data":"46ee565da0adfba7e3177071df26af41bce8f84bb2210c9ed747df7aeddc60d3"} Apr 24 14:44:21.069974 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.069922 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" Apr 24 14:44:21.069974 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.069935 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws" event={"ID":"6fff380d-f461-478a-9ead-b38373d06339","Type":"ContainerDied","Data":"c4768984c38f6b05f8d661d8d600a2f6434ac247e6122a743df93167c3e77ce1"} Apr 24 14:44:21.076725 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.076706 2570 scope.go:117] "RemoveContainer" containerID="2694928c9a85dee4066ffc11e600b6ae098507fc353a5499a9beae080bd19c8f" Apr 24 14:44:21.076992 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:44:21.076975 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2694928c9a85dee4066ffc11e600b6ae098507fc353a5499a9beae080bd19c8f\": container with ID starting with 2694928c9a85dee4066ffc11e600b6ae098507fc353a5499a9beae080bd19c8f not found: ID does not exist" containerID="2694928c9a85dee4066ffc11e600b6ae098507fc353a5499a9beae080bd19c8f" Apr 24 14:44:21.077048 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.076998 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2694928c9a85dee4066ffc11e600b6ae098507fc353a5499a9beae080bd19c8f"} err="failed to get container status \"2694928c9a85dee4066ffc11e600b6ae098507fc353a5499a9beae080bd19c8f\": rpc error: code = NotFound desc = could not find container \"2694928c9a85dee4066ffc11e600b6ae098507fc353a5499a9beae080bd19c8f\": container with ID starting with 2694928c9a85dee4066ffc11e600b6ae098507fc353a5499a9beae080bd19c8f not found: ID does not exist" Apr 24 14:44:21.077048 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.077014 2570 scope.go:117] "RemoveContainer" containerID="46ee565da0adfba7e3177071df26af41bce8f84bb2210c9ed747df7aeddc60d3" Apr 24 14:44:21.085206 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.085184 2570 scope.go:117] "RemoveContainer" containerID="46ee565da0adfba7e3177071df26af41bce8f84bb2210c9ed747df7aeddc60d3" Apr 24 14:44:21.085314 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.085268 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws"] Apr 24 14:44:21.085471 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:44:21.085445 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46ee565da0adfba7e3177071df26af41bce8f84bb2210c9ed747df7aeddc60d3\": container with ID starting with 46ee565da0adfba7e3177071df26af41bce8f84bb2210c9ed747df7aeddc60d3 not found: ID does not exist" containerID="46ee565da0adfba7e3177071df26af41bce8f84bb2210c9ed747df7aeddc60d3" Apr 24 14:44:21.085558 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.085480 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46ee565da0adfba7e3177071df26af41bce8f84bb2210c9ed747df7aeddc60d3"} err="failed to get container status \"46ee565da0adfba7e3177071df26af41bce8f84bb2210c9ed747df7aeddc60d3\": rpc error: code = NotFound desc = could not find container \"46ee565da0adfba7e3177071df26af41bce8f84bb2210c9ed747df7aeddc60d3\": container with ID starting with 46ee565da0adfba7e3177071df26af41bce8f84bb2210c9ed747df7aeddc60d3 not found: ID does not exist" Apr 24 14:44:21.087357 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.087337 2570 kubelet.go:2547] 
"SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-71827-predictor-5df7cd677f-v5rws"] Apr 24 14:44:21.095871 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.095851 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq"] Apr 24 14:44:21.101515 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:21.101491 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-71827-predictor-6cd7d94bb6-bvhmq"] Apr 24 14:44:22.428460 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:22.428430 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f60c23a-86e2-4674-a671-a867be96e616" path="/var/lib/kubelet/pods/2f60c23a-86e2-4674-a671-a867be96e616/volumes" Apr 24 14:44:22.428822 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:22.428663 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fff380d-f461-478a-9ead-b38373d06339" path="/var/lib/kubelet/pods/6fff380d-f461-478a-9ead-b38373d06339/volumes" Apr 24 14:44:28.938322 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:28.938244 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" Apr 24 14:44:28.938322 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:28.938295 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" Apr 24 14:44:29.061919 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:29.061868 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" podUID="6bad82f6-4e93-448e-ad5f-e78cd014418e" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.29:8080: connect: connection refused" Apr 24 14:44:29.062128 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:29.061876 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" podUID="d0f71684-5c0d-4bff-80a8-787d65b1318c" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused" Apr 24 14:44:39.063019 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:39.062966 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" podUID="d0f71684-5c0d-4bff-80a8-787d65b1318c" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused" Apr 24 14:44:39.063426 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:39.062966 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" podUID="6bad82f6-4e93-448e-ad5f-e78cd014418e" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.29:8080: connect: connection refused" Apr 24 14:44:49.062675 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:49.062631 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" podUID="6bad82f6-4e93-448e-ad5f-e78cd014418e" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.29:8080: connect: connection refused" Apr 24 14:44:49.063090 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:49.062632 2570 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" podUID="d0f71684-5c0d-4bff-80a8-787d65b1318c" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused" Apr 24 14:44:57.239382 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.239351 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk"] Apr 24 14:44:57.239734 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.239645 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2f60c23a-86e2-4674-a671-a867be96e616" containerName="kserve-container" Apr 24 14:44:57.239734 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.239655 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f60c23a-86e2-4674-a671-a867be96e616" containerName="kserve-container" Apr 24 14:44:57.239734 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.239675 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6fff380d-f461-478a-9ead-b38373d06339" containerName="kserve-container" Apr 24 14:44:57.239734 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.239681 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fff380d-f461-478a-9ead-b38373d06339" containerName="kserve-container" Apr 24 14:44:57.239734 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.239726 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="2f60c23a-86e2-4674-a671-a867be96e616" containerName="kserve-container" Apr 24 14:44:57.239734 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.239734 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="6fff380d-f461-478a-9ead-b38373d06339" containerName="kserve-container" Apr 24 14:44:57.244180 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.244156 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh"] Apr 24 14:44:57.244298 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.244288 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" Apr 24 14:44:57.244390 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.244363 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" podUID="a24f9743-ed88-40c5-a792-732789d9aedd" containerName="kserve-container" containerID="cri-o://4f6e5a1c5969a7c348cab7daefef2fdb85303821d0f97fbe5bd5e0f05c76ba65" gracePeriod=30 Apr 24 14:44:57.251582 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.251556 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk"] Apr 24 14:44:57.254793 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.254773 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" Apr 24 14:44:57.323758 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.323726 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs"] Apr 24 14:44:57.328683 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.328369 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" Apr 24 14:44:57.345329 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.342570 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" Apr 24 14:44:57.348334 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.347180 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs"] Apr 24 14:44:57.354304 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.354194 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79"] Apr 24 14:44:57.354520 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.354470 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" podUID="223202e5-c32d-4552-a13e-219de7f44444" containerName="kserve-container" containerID="cri-o://634e1d6d42da3568b7c0e52ff6393ab394507fba1930249daf5192964960af9e" gracePeriod=30 Apr 24 14:44:57.399948 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.399781 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk"] Apr 24 14:44:57.404263 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:44:57.404230 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0537423_6464_47bd_96a4_68cf178420b9.slice/crio-3a8c8d4843c0cd51c0faf45f0f3b4d9f972b5abc9882dafaab6c7de501a3110c WatchSource:0}: Error finding container 3a8c8d4843c0cd51c0faf45f0f3b4d9f972b5abc9882dafaab6c7de501a3110c: Status 404 returned error can't find the container with id 3a8c8d4843c0cd51c0faf45f0f3b4d9f972b5abc9882dafaab6c7de501a3110c Apr 24 14:44:57.472840 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:57.472806 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs"] Apr 24 14:44:57.477866 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:44:57.477842 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd05aacc3_8e49_401e_98fe_121e82e9c6c4.slice/crio-bbd99a1329edfa72c538a543f7fc28e26d053f6421db7d7317881447d55ecb10 WatchSource:0}: Error finding container bbd99a1329edfa72c538a543f7fc28e26d053f6421db7d7317881447d55ecb10: Status 404 returned error can't find the container with id bbd99a1329edfa72c538a543f7fc28e26d053f6421db7d7317881447d55ecb10 Apr 24 14:44:58.178244 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:58.178201 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" event={"ID":"a0537423-6464-47bd-96a4-68cf178420b9","Type":"ContainerStarted","Data":"f37f9335fac7d602038ebb34cae6e365633ed11922ca4a4c824ea673a2af6f8b"} Apr 24 14:44:58.178244 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:58.178243 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" event={"ID":"a0537423-6464-47bd-96a4-68cf178420b9","Type":"ContainerStarted","Data":"3a8c8d4843c0cd51c0faf45f0f3b4d9f972b5abc9882dafaab6c7de501a3110c"} Apr 24 14:44:58.178496 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:58.178442 2570 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" Apr 24 14:44:58.179696 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:58.179662 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" event={"ID":"d05aacc3-8e49-401e-98fe-121e82e9c6c4","Type":"ContainerStarted","Data":"b7ead89b738458034a4d8f5b719c094bf5f54c746a90c6af92f42604cf84314a"} Apr 24 14:44:58.179696 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:58.179699 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" event={"ID":"d05aacc3-8e49-401e-98fe-121e82e9c6c4","Type":"ContainerStarted","Data":"bbd99a1329edfa72c538a543f7fc28e26d053f6421db7d7317881447d55ecb10"} Apr 24 14:44:58.179882 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:58.179843 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" Apr 24 14:44:58.179938 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:58.179921 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" podUID="a0537423-6464-47bd-96a4-68cf178420b9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.30:8080: connect: connection refused" Apr 24 14:44:58.180753 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:58.180731 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" podUID="d05aacc3-8e49-401e-98fe-121e82e9c6c4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.31:8080: connect: connection refused" Apr 24 14:44:58.193397 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:58.193353 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" podStartSLOduration=1.193339036 podStartE2EDuration="1.193339036s" podCreationTimestamp="2026-04-24 14:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:44:58.192317671 +0000 UTC m=+1276.317721486" watchObservedRunningTime="2026-04-24 14:44:58.193339036 +0000 UTC m=+1276.318742851" Apr 24 14:44:58.208007 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:58.207962 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" podStartSLOduration=1.207949859 podStartE2EDuration="1.207949859s" podCreationTimestamp="2026-04-24 14:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:44:58.207726705 +0000 UTC m=+1276.333130521" watchObservedRunningTime="2026-04-24 14:44:58.207949859 +0000 UTC m=+1276.333353674" Apr 24 14:44:58.937938 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:58.937893 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" podUID="223202e5-c32d-4552-a13e-219de7f44444" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" Apr 24 14:44:58.938390 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:58.937900 2570 prober.go:120] "Probe failed" 
probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" podUID="a24f9743-ed88-40c5-a792-732789d9aedd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 24 14:44:59.062148 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:59.062086 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" podUID="6bad82f6-4e93-448e-ad5f-e78cd014418e" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.29:8080: connect: connection refused" Apr 24 14:44:59.062334 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:59.062086 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" podUID="d0f71684-5c0d-4bff-80a8-787d65b1318c" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused" Apr 24 14:44:59.183564 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:59.183525 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" podUID="a0537423-6464-47bd-96a4-68cf178420b9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.30:8080: connect: connection refused" Apr 24 14:44:59.184212 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:44:59.184170 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" podUID="d05aacc3-8e49-401e-98fe-121e82e9c6c4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.31:8080: connect: connection refused" Apr 24 14:45:01.192261 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:01.191907 2570 generic.go:358] "Generic (PLEG): container finished" podID="a24f9743-ed88-40c5-a792-732789d9aedd" containerID="4f6e5a1c5969a7c348cab7daefef2fdb85303821d0f97fbe5bd5e0f05c76ba65" exitCode=0 Apr 24 14:45:01.192261 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:01.191982 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" event={"ID":"a24f9743-ed88-40c5-a792-732789d9aedd","Type":"ContainerDied","Data":"4f6e5a1c5969a7c348cab7daefef2fdb85303821d0f97fbe5bd5e0f05c76ba65"} Apr 24 14:45:01.305237 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:01.305211 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" Apr 24 14:45:01.414357 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:01.414335 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" Apr 24 14:45:02.195659 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:02.195625 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" Apr 24 14:45:02.196065 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:02.195631 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh" event={"ID":"a24f9743-ed88-40c5-a792-732789d9aedd","Type":"ContainerDied","Data":"5bf56d95589d53b8b6360f506368a806cb90b990986937d1857c7c7297f627cd"} Apr 24 14:45:02.196065 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:02.195749 2570 scope.go:117] "RemoveContainer" containerID="4f6e5a1c5969a7c348cab7daefef2fdb85303821d0f97fbe5bd5e0f05c76ba65" Apr 24 14:45:02.196885 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:02.196794 2570 generic.go:358] "Generic (PLEG): container finished" podID="223202e5-c32d-4552-a13e-219de7f44444" containerID="634e1d6d42da3568b7c0e52ff6393ab394507fba1930249daf5192964960af9e" exitCode=0 Apr 24 14:45:02.196885 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:02.196842 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" Apr 24 14:45:02.196885 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:02.196845 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" event={"ID":"223202e5-c32d-4552-a13e-219de7f44444","Type":"ContainerDied","Data":"634e1d6d42da3568b7c0e52ff6393ab394507fba1930249daf5192964960af9e"} Apr 24 14:45:02.196885 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:02.196870 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79" event={"ID":"223202e5-c32d-4552-a13e-219de7f44444","Type":"ContainerDied","Data":"2dcba1cdf992a5192f801d509f51719c4ff500b3ad11061ee07fd890e711d70e"} Apr 24 14:45:02.203657 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:02.203636 2570 scope.go:117] "RemoveContainer" containerID="634e1d6d42da3568b7c0e52ff6393ab394507fba1930249daf5192964960af9e" Apr 24 14:45:02.210773 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:02.210756 2570 scope.go:117] "RemoveContainer" containerID="634e1d6d42da3568b7c0e52ff6393ab394507fba1930249daf5192964960af9e" Apr 24 14:45:02.210985 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:45:02.210966 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"634e1d6d42da3568b7c0e52ff6393ab394507fba1930249daf5192964960af9e\": container with ID starting with 634e1d6d42da3568b7c0e52ff6393ab394507fba1930249daf5192964960af9e not found: ID does not exist" containerID="634e1d6d42da3568b7c0e52ff6393ab394507fba1930249daf5192964960af9e" Apr 24 14:45:02.211032 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:02.210993 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"634e1d6d42da3568b7c0e52ff6393ab394507fba1930249daf5192964960af9e"} err="failed to get container status \"634e1d6d42da3568b7c0e52ff6393ab394507fba1930249daf5192964960af9e\": rpc error: code = NotFound desc = could not find container \"634e1d6d42da3568b7c0e52ff6393ab394507fba1930249daf5192964960af9e\": container with ID starting with 634e1d6d42da3568b7c0e52ff6393ab394507fba1930249daf5192964960af9e not found: ID does not exist" Apr 24 14:45:02.218478 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:02.218458 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79"] Apr 24 14:45:02.221946 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:02.221926 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-0cfc3-predictor-5f76954b8c-4ft79"] Apr 24 14:45:02.231708 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:02.231684 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh"] Apr 24 14:45:02.235025 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:02.235001 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0cfc3-predictor-54bcbf5466-s6kjh"] Apr 24 14:45:02.428672 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:02.428636 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="223202e5-c32d-4552-a13e-219de7f44444" path="/var/lib/kubelet/pods/223202e5-c32d-4552-a13e-219de7f44444/volumes" Apr 24 14:45:02.428879 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:02.428867 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a24f9743-ed88-40c5-a792-732789d9aedd" path="/var/lib/kubelet/pods/a24f9743-ed88-40c5-a792-732789d9aedd/volumes" Apr 24 14:45:09.063960 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:09.063931 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" Apr 24 14:45:09.064468 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:09.063990 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" Apr 24 14:45:09.183495 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:09.183450 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" podUID="a0537423-6464-47bd-96a4-68cf178420b9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.30:8080: connect: connection refused" Apr 24 14:45:09.183691 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:09.183465 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" podUID="d05aacc3-8e49-401e-98fe-121e82e9c6c4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.31:8080: connect: connection refused" Apr 24 14:45:19.183527 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:19.183483 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" podUID="d05aacc3-8e49-401e-98fe-121e82e9c6c4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.31:8080: connect: connection refused" Apr 24 14:45:19.184121 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:19.183487 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" podUID="a0537423-6464-47bd-96a4-68cf178420b9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.30:8080: connect: connection refused" Apr 24 14:45:29.184207 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:29.184162 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" podUID="a0537423-6464-47bd-96a4-68cf178420b9" containerName="kserve-container" probeResult="failure" output="dial tcp 
10.132.0.30:8080: connect: connection refused" Apr 24 14:45:29.184692 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:29.184174 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" podUID="d05aacc3-8e49-401e-98fe-121e82e9c6c4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.31:8080: connect: connection refused" Apr 24 14:45:37.211728 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.211691 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz"] Apr 24 14:45:37.212136 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.211954 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" podUID="d0f71684-5c0d-4bff-80a8-787d65b1318c" containerName="kserve-container" containerID="cri-o://5830a04b666e9666f13a734e4ee27f31766dc507431c88a985eaf20cc8f8cd55" gracePeriod=30 Apr 24 14:45:37.218313 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.218288 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm"] Apr 24 14:45:37.218557 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.218547 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a24f9743-ed88-40c5-a792-732789d9aedd" containerName="kserve-container" Apr 24 14:45:37.218599 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.218559 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="a24f9743-ed88-40c5-a792-732789d9aedd" containerName="kserve-container" Apr 24 14:45:37.218599 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.218568 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="223202e5-c32d-4552-a13e-219de7f44444" containerName="kserve-container" Apr 24 14:45:37.218599 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.218574 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="223202e5-c32d-4552-a13e-219de7f44444" containerName="kserve-container" Apr 24 14:45:37.218696 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.218629 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="a24f9743-ed88-40c5-a792-732789d9aedd" containerName="kserve-container" Apr 24 14:45:37.218696 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.218637 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="223202e5-c32d-4552-a13e-219de7f44444" containerName="kserve-container" Apr 24 14:45:37.221376 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.221361 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" Apr 24 14:45:37.230589 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.230566 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm"] Apr 24 14:45:37.230771 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.230755 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" Apr 24 14:45:37.296989 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.296954 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5"] Apr 24 14:45:37.301660 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.301609 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" Apr 24 14:45:37.303954 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.303927 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt"] Apr 24 14:45:37.304219 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.304194 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" podUID="6bad82f6-4e93-448e-ad5f-e78cd014418e" containerName="kserve-container" containerID="cri-o://51167b21e110b3a9ecaa3b65557f9740807102e177584c51f7ca92ae1d170212" gracePeriod=30 Apr 24 14:45:37.307991 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.307965 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5"] Apr 24 14:45:37.314115 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.314086 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" Apr 24 14:45:37.368164 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.367953 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm"] Apr 24 14:45:37.371661 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:45:37.371620 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5eff5b6_5feb_4514_b30b_6460063dd3b4.slice/crio-b9281461adfd83af2cd89c858a839afcbb56cf50b7b89efd2af321e92adb4e29 WatchSource:0}: Error finding container b9281461adfd83af2cd89c858a839afcbb56cf50b7b89efd2af321e92adb4e29: Status 404 returned error can't find the container with id b9281461adfd83af2cd89c858a839afcbb56cf50b7b89efd2af321e92adb4e29 Apr 24 14:45:37.446820 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:37.446791 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5"] Apr 24 14:45:37.449668 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:45:37.449638 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87e707d2_09bc_4786_b70b_9f3ed3913cd2.slice/crio-0bd730dde4833b3a50eaf855ef65bcfa1c6b1dd859ae3cafb4176d59d1eac46b WatchSource:0}: Error finding container 0bd730dde4833b3a50eaf855ef65bcfa1c6b1dd859ae3cafb4176d59d1eac46b: Status 404 returned error can't find the container with id 0bd730dde4833b3a50eaf855ef65bcfa1c6b1dd859ae3cafb4176d59d1eac46b Apr 24 14:45:38.308131 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:38.308081 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" event={"ID":"e5eff5b6-5feb-4514-b30b-6460063dd3b4","Type":"ContainerStarted","Data":"7b85a94ef2a2b6671db28ff0f520a75e6f8b4c33e473ef6e5e7a53e838951251"} Apr 24 14:45:38.308131 ip-10-0-129-34 kubenswrapper[2570]: 
I0424 14:45:38.308133 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" event={"ID":"e5eff5b6-5feb-4514-b30b-6460063dd3b4","Type":"ContainerStarted","Data":"b9281461adfd83af2cd89c858a839afcbb56cf50b7b89efd2af321e92adb4e29"} Apr 24 14:45:38.308604 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:38.308273 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" Apr 24 14:45:38.309490 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:38.309463 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" event={"ID":"87e707d2-09bc-4786-b70b-9f3ed3913cd2","Type":"ContainerStarted","Data":"331c2cea28cb6520d91a4d24a070c0e30929cb920dc879180e4b9d567dd4a0fe"} Apr 24 14:45:38.309490 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:38.309492 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" event={"ID":"87e707d2-09bc-4786-b70b-9f3ed3913cd2","Type":"ContainerStarted","Data":"0bd730dde4833b3a50eaf855ef65bcfa1c6b1dd859ae3cafb4176d59d1eac46b"} Apr 24 14:45:38.309697 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:38.309676 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" Apr 24 14:45:38.309761 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:38.309743 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" podUID="e5eff5b6-5feb-4514-b30b-6460063dd3b4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.32:8080: connect: connection refused" Apr 24 14:45:38.310408 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:38.310388 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" podUID="87e707d2-09bc-4786-b70b-9f3ed3913cd2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.33:8080: connect: connection refused" Apr 24 14:45:38.322647 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:38.322611 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" podStartSLOduration=1.322597404 podStartE2EDuration="1.322597404s" podCreationTimestamp="2026-04-24 14:45:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:45:38.321633506 +0000 UTC m=+1316.447037318" watchObservedRunningTime="2026-04-24 14:45:38.322597404 +0000 UTC m=+1316.448001219" Apr 24 14:45:38.335351 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:38.335314 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" podStartSLOduration=1.335303685 podStartE2EDuration="1.335303685s" podCreationTimestamp="2026-04-24 14:45:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:45:38.333882197 +0000 UTC m=+1316.459286011" watchObservedRunningTime="2026-04-24 14:45:38.335303685 +0000 UTC m=+1316.460707569" Apr 24 14:45:39.062028 ip-10-0-129-34 kubenswrapper[2570]: I0424 
14:45:39.061984 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" podUID="6bad82f6-4e93-448e-ad5f-e78cd014418e" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.29:8080: connect: connection refused" Apr 24 14:45:39.062220 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:39.061984 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" podUID="d0f71684-5c0d-4bff-80a8-787d65b1318c" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused" Apr 24 14:45:39.184278 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:39.184231 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" podUID="d05aacc3-8e49-401e-98fe-121e82e9c6c4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.31:8080: connect: connection refused" Apr 24 14:45:39.184499 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:39.184231 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" podUID="a0537423-6464-47bd-96a4-68cf178420b9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.30:8080: connect: connection refused" Apr 24 14:45:39.312130 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:39.312013 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" podUID="e5eff5b6-5feb-4514-b30b-6460063dd3b4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.32:8080: connect: connection refused" Apr 24 14:45:39.312497 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:39.312165 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" podUID="87e707d2-09bc-4786-b70b-9f3ed3913cd2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.33:8080: connect: connection refused" Apr 24 14:45:41.318968 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:41.318938 2570 generic.go:358] "Generic (PLEG): container finished" podID="6bad82f6-4e93-448e-ad5f-e78cd014418e" containerID="51167b21e110b3a9ecaa3b65557f9740807102e177584c51f7ca92ae1d170212" exitCode=0 Apr 24 14:45:41.319336 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:41.319012 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" event={"ID":"6bad82f6-4e93-448e-ad5f-e78cd014418e","Type":"ContainerDied","Data":"51167b21e110b3a9ecaa3b65557f9740807102e177584c51f7ca92ae1d170212"} Apr 24 14:45:41.340337 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:41.340316 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" Apr 24 14:45:42.325548 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:42.325513 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" event={"ID":"6bad82f6-4e93-448e-ad5f-e78cd014418e","Type":"ContainerDied","Data":"0a9f2bd9ef2aa36f94fc0a5d62580ccf1bc36d781bfb1e174cf1f66dd9f40c00"} Apr 24 14:45:42.325958 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:42.325559 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt" Apr 24 14:45:42.325958 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:42.325559 2570 scope.go:117] "RemoveContainer" containerID="51167b21e110b3a9ecaa3b65557f9740807102e177584c51f7ca92ae1d170212" Apr 24 14:45:42.346628 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:42.346601 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt"] Apr 24 14:45:42.349608 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:42.349589 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-5e366-predictor-6c46fdfff9-ngkdt"] Apr 24 14:45:42.428563 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:42.428527 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bad82f6-4e93-448e-ad5f-e78cd014418e" path="/var/lib/kubelet/pods/6bad82f6-4e93-448e-ad5f-e78cd014418e/volumes" Apr 24 14:45:44.332665 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:44.332630 2570 generic.go:358] "Generic (PLEG): container finished" podID="d0f71684-5c0d-4bff-80a8-787d65b1318c" containerID="5830a04b666e9666f13a734e4ee27f31766dc507431c88a985eaf20cc8f8cd55" exitCode=0 Apr 24 14:45:44.332979 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:44.332705 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" event={"ID":"d0f71684-5c0d-4bff-80a8-787d65b1318c","Type":"ContainerDied","Data":"5830a04b666e9666f13a734e4ee27f31766dc507431c88a985eaf20cc8f8cd55"} Apr 24 14:45:44.452696 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:44.452673 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" Apr 24 14:45:45.336354 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:45.336325 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" Apr 24 14:45:45.336776 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:45.336325 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz" event={"ID":"d0f71684-5c0d-4bff-80a8-787d65b1318c","Type":"ContainerDied","Data":"fc1352b762935e75db7a9c6b0579cf30b75572f1da45700ad3ae19f871d9918b"} Apr 24 14:45:45.336776 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:45.336439 2570 scope.go:117] "RemoveContainer" containerID="5830a04b666e9666f13a734e4ee27f31766dc507431c88a985eaf20cc8f8cd55" Apr 24 14:45:45.356546 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:45.356521 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz"] Apr 24 14:45:45.359835 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:45.359814 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-5e366-predictor-5878ff6f5d-xxlgz"] Apr 24 14:45:46.427964 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:46.427929 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0f71684-5c0d-4bff-80a8-787d65b1318c" path="/var/lib/kubelet/pods/d0f71684-5c0d-4bff-80a8-787d65b1318c/volumes" Apr 24 14:45:49.184611 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:49.184574 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" Apr 24 14:45:49.184999 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:49.184982 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" Apr 24 14:45:49.312498 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:49.312456 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" podUID="87e707d2-09bc-4786-b70b-9f3ed3913cd2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.33:8080: connect: connection refused" Apr 24 14:45:49.312683 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:49.312461 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" podUID="e5eff5b6-5feb-4514-b30b-6460063dd3b4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.32:8080: connect: connection refused" Apr 24 14:45:59.312197 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:59.312076 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" podUID="e5eff5b6-5feb-4514-b30b-6460063dd3b4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.32:8080: connect: connection refused" Apr 24 14:45:59.312659 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:45:59.312076 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" podUID="87e707d2-09bc-4786-b70b-9f3ed3913cd2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.33:8080: connect: connection refused" Apr 24 14:46:09.313024 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:46:09.312980 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" 
podUID="87e707d2-09bc-4786-b70b-9f3ed3913cd2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.33:8080: connect: connection refused" Apr 24 14:46:09.313526 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:46:09.312985 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" podUID="e5eff5b6-5feb-4514-b30b-6460063dd3b4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.32:8080: connect: connection refused" Apr 24 14:46:19.313086 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:46:19.313033 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" podUID="87e707d2-09bc-4786-b70b-9f3ed3913cd2" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.33:8080: connect: connection refused" Apr 24 14:46:19.313565 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:46:19.313033 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" podUID="e5eff5b6-5feb-4514-b30b-6460063dd3b4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.32:8080: connect: connection refused" Apr 24 14:46:29.313067 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:46:29.313022 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" Apr 24 14:46:29.313577 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:46:29.313343 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" Apr 24 14:48:42.419436 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:48:42.419399 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 14:48:42.421883 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:48:42.421133 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 14:53:42.437528 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:53:42.437496 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 14:53:42.440362 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:53:42.440344 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 14:54:22.128930 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.128895 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk"] Apr 24 14:54:22.129537 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.129146 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" podUID="a0537423-6464-47bd-96a4-68cf178420b9" containerName="kserve-container" containerID="cri-o://f37f9335fac7d602038ebb34cae6e365633ed11922ca4a4c824ea673a2af6f8b" gracePeriod=30 Apr 24 14:54:22.174612 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.174577 2570 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk"] Apr 24 14:54:22.174884 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.174872 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6bad82f6-4e93-448e-ad5f-e78cd014418e" containerName="kserve-container" Apr 24 14:54:22.174938 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.174885 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bad82f6-4e93-448e-ad5f-e78cd014418e" containerName="kserve-container" Apr 24 14:54:22.174938 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.174906 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0f71684-5c0d-4bff-80a8-787d65b1318c" containerName="kserve-container" Apr 24 14:54:22.174938 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.174913 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f71684-5c0d-4bff-80a8-787d65b1318c" containerName="kserve-container" Apr 24 14:54:22.175040 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.174955 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="6bad82f6-4e93-448e-ad5f-e78cd014418e" containerName="kserve-container" Apr 24 14:54:22.175040 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.174962 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0f71684-5c0d-4bff-80a8-787d65b1318c" containerName="kserve-container" Apr 24 14:54:22.177744 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.177727 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" Apr 24 14:54:22.188974 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.188939 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" Apr 24 14:54:22.191020 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.190982 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk"] Apr 24 14:54:22.213123 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.213073 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs"] Apr 24 14:54:22.213379 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.213353 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" podUID="d05aacc3-8e49-401e-98fe-121e82e9c6c4" containerName="kserve-container" containerID="cri-o://b7ead89b738458034a4d8f5b719c094bf5f54c746a90c6af92f42604cf84314a" gracePeriod=30 Apr 24 14:54:22.229920 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.229888 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd"] Apr 24 14:54:22.233629 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.233602 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" Apr 24 14:54:22.239576 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.239504 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd"] Apr 24 14:54:22.249148 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.249127 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" Apr 24 14:54:22.329015 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.328983 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk"] Apr 24 14:54:22.334069 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:54:22.333947 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0b6ad9f_812f_409d_ad14_b998849981f7.slice/crio-9475f18882953de074a0992b0fb2b9248ecb7dddc703d1a289263f28acaf59e0 WatchSource:0}: Error finding container 9475f18882953de074a0992b0fb2b9248ecb7dddc703d1a289263f28acaf59e0: Status 404 returned error can't find the container with id 9475f18882953de074a0992b0fb2b9248ecb7dddc703d1a289263f28acaf59e0 Apr 24 14:54:22.340716 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.339307 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 24 14:54:22.379750 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.379679 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd"] Apr 24 14:54:22.382339 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:54:22.382304 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d7873a8_aee4_4b1b_8d62_336d952f3d15.slice/crio-9f439a4c1f0a4296c9d10d7bfe925f388fe5b532ffd0390ab3c08172b7904cf2 WatchSource:0}: Error finding container 9f439a4c1f0a4296c9d10d7bfe925f388fe5b532ffd0390ab3c08172b7904cf2: Status 404 returned error can't find the container with id 9f439a4c1f0a4296c9d10d7bfe925f388fe5b532ffd0390ab3c08172b7904cf2 Apr 24 14:54:22.766768 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.766726 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" event={"ID":"5d7873a8-aee4-4b1b-8d62-336d952f3d15","Type":"ContainerStarted","Data":"4a44f2c30d8b217569c7eebf148ec52647ff16cba3ab716a3cdfd585620f7986"} Apr 24 14:54:22.766768 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.766773 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" event={"ID":"5d7873a8-aee4-4b1b-8d62-336d952f3d15","Type":"ContainerStarted","Data":"9f439a4c1f0a4296c9d10d7bfe925f388fe5b532ffd0390ab3c08172b7904cf2"} Apr 24 14:54:22.767026 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.766907 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" Apr 24 14:54:22.768028 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.767995 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" event={"ID":"d0b6ad9f-812f-409d-ad14-b998849981f7","Type":"ContainerStarted","Data":"3d99952639221d8325e25d83e0b0d1fac82ed9c20b8327223a7f0ca591672ce6"} Apr 24 14:54:22.768161 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.768035 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" event={"ID":"d0b6ad9f-812f-409d-ad14-b998849981f7","Type":"ContainerStarted","Data":"9475f18882953de074a0992b0fb2b9248ecb7dddc703d1a289263f28acaf59e0"} Apr 24 14:54:22.768263 ip-10-0-129-34 kubenswrapper[2570]: 
I0424 14:54:22.768243 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" Apr 24 14:54:22.768486 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.768456 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" podUID="5d7873a8-aee4-4b1b-8d62-336d952f3d15" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 24 14:54:22.769244 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.769222 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" podUID="d0b6ad9f-812f-409d-ad14-b998849981f7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused" Apr 24 14:54:22.781404 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.781368 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" podStartSLOduration=0.781357421 podStartE2EDuration="781.357421ms" podCreationTimestamp="2026-04-24 14:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:54:22.780426112 +0000 UTC m=+1840.905829929" watchObservedRunningTime="2026-04-24 14:54:22.781357421 +0000 UTC m=+1840.906761236" Apr 24 14:54:22.794542 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:22.794503 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" podStartSLOduration=0.794492522 podStartE2EDuration="794.492522ms" podCreationTimestamp="2026-04-24 14:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:54:22.79297559 +0000 UTC m=+1840.918379405" watchObservedRunningTime="2026-04-24 14:54:22.794492522 +0000 UTC m=+1840.919896424" Apr 24 14:54:23.771142 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:23.771089 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" podUID="d0b6ad9f-812f-409d-ad14-b998849981f7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused" Apr 24 14:54:23.771501 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:23.771156 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" podUID="5d7873a8-aee4-4b1b-8d62-336d952f3d15" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 24 14:54:25.555199 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:25.555175 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" Apr 24 14:54:25.777089 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:25.777045 2570 generic.go:358] "Generic (PLEG): container finished" podID="d05aacc3-8e49-401e-98fe-121e82e9c6c4" containerID="b7ead89b738458034a4d8f5b719c094bf5f54c746a90c6af92f42604cf84314a" exitCode=0 Apr 24 14:54:25.777280 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:25.777136 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" Apr 24 14:54:25.777280 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:25.777131 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" event={"ID":"d05aacc3-8e49-401e-98fe-121e82e9c6c4","Type":"ContainerDied","Data":"b7ead89b738458034a4d8f5b719c094bf5f54c746a90c6af92f42604cf84314a"} Apr 24 14:54:25.777280 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:25.777239 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs" event={"ID":"d05aacc3-8e49-401e-98fe-121e82e9c6c4","Type":"ContainerDied","Data":"bbd99a1329edfa72c538a543f7fc28e26d053f6421db7d7317881447d55ecb10"} Apr 24 14:54:25.777280 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:25.777255 2570 scope.go:117] "RemoveContainer" containerID="b7ead89b738458034a4d8f5b719c094bf5f54c746a90c6af92f42604cf84314a" Apr 24 14:54:25.785563 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:25.785542 2570 scope.go:117] "RemoveContainer" containerID="b7ead89b738458034a4d8f5b719c094bf5f54c746a90c6af92f42604cf84314a" Apr 24 14:54:25.785904 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:54:25.785840 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7ead89b738458034a4d8f5b719c094bf5f54c746a90c6af92f42604cf84314a\": container with ID starting with b7ead89b738458034a4d8f5b719c094bf5f54c746a90c6af92f42604cf84314a not found: ID does not exist" containerID="b7ead89b738458034a4d8f5b719c094bf5f54c746a90c6af92f42604cf84314a" Apr 24 14:54:25.785972 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:25.785897 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7ead89b738458034a4d8f5b719c094bf5f54c746a90c6af92f42604cf84314a"} err="failed to get container status \"b7ead89b738458034a4d8f5b719c094bf5f54c746a90c6af92f42604cf84314a\": rpc error: code = NotFound desc = could not find container \"b7ead89b738458034a4d8f5b719c094bf5f54c746a90c6af92f42604cf84314a\": container with ID starting with b7ead89b738458034a4d8f5b719c094bf5f54c746a90c6af92f42604cf84314a not found: ID does not exist" Apr 24 14:54:25.798538 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:25.798512 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs"] Apr 24 14:54:25.801665 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:25.801643 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-47293-predictor-558b7974ff-4dtvs"] Apr 24 14:54:26.168327 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:26.168307 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" Apr 24 14:54:26.428158 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:26.428120 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d05aacc3-8e49-401e-98fe-121e82e9c6c4" path="/var/lib/kubelet/pods/d05aacc3-8e49-401e-98fe-121e82e9c6c4/volumes" Apr 24 14:54:26.781429 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:26.781346 2570 generic.go:358] "Generic (PLEG): container finished" podID="a0537423-6464-47bd-96a4-68cf178420b9" containerID="f37f9335fac7d602038ebb34cae6e365633ed11922ca4a4c824ea673a2af6f8b" exitCode=0 Apr 24 14:54:26.781429 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:26.781396 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" event={"ID":"a0537423-6464-47bd-96a4-68cf178420b9","Type":"ContainerDied","Data":"f37f9335fac7d602038ebb34cae6e365633ed11922ca4a4c824ea673a2af6f8b"} Apr 24 14:54:26.781429 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:26.781408 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" Apr 24 14:54:26.781429 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:26.781429 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk" event={"ID":"a0537423-6464-47bd-96a4-68cf178420b9","Type":"ContainerDied","Data":"3a8c8d4843c0cd51c0faf45f0f3b4d9f972b5abc9882dafaab6c7de501a3110c"} Apr 24 14:54:26.781958 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:26.781449 2570 scope.go:117] "RemoveContainer" containerID="f37f9335fac7d602038ebb34cae6e365633ed11922ca4a4c824ea673a2af6f8b" Apr 24 14:54:26.790383 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:26.790360 2570 scope.go:117] "RemoveContainer" containerID="f37f9335fac7d602038ebb34cae6e365633ed11922ca4a4c824ea673a2af6f8b" Apr 24 14:54:26.790634 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:54:26.790611 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f37f9335fac7d602038ebb34cae6e365633ed11922ca4a4c824ea673a2af6f8b\": container with ID starting with f37f9335fac7d602038ebb34cae6e365633ed11922ca4a4c824ea673a2af6f8b not found: ID does not exist" containerID="f37f9335fac7d602038ebb34cae6e365633ed11922ca4a4c824ea673a2af6f8b" Apr 24 14:54:26.790728 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:26.790640 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37f9335fac7d602038ebb34cae6e365633ed11922ca4a4c824ea673a2af6f8b"} err="failed to get container status \"f37f9335fac7d602038ebb34cae6e365633ed11922ca4a4c824ea673a2af6f8b\": rpc error: code = NotFound desc = could not find container \"f37f9335fac7d602038ebb34cae6e365633ed11922ca4a4c824ea673a2af6f8b\": container with ID starting with f37f9335fac7d602038ebb34cae6e365633ed11922ca4a4c824ea673a2af6f8b not found: ID does not exist" Apr 24 14:54:26.796736 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:26.796712 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk"] Apr 24 14:54:26.798903 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:26.798882 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-47293-predictor-5dfbf9fb9d-n6bpk"] Apr 24 14:54:28.428360 ip-10-0-129-34 
kubenswrapper[2570]: I0424 14:54:28.428323 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0537423-6464-47bd-96a4-68cf178420b9" path="/var/lib/kubelet/pods/a0537423-6464-47bd-96a4-68cf178420b9/volumes" Apr 24 14:54:33.771561 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:33.771502 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" podUID="5d7873a8-aee4-4b1b-8d62-336d952f3d15" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 24 14:54:33.771938 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:33.771509 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" podUID="d0b6ad9f-812f-409d-ad14-b998849981f7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused" Apr 24 14:54:43.772206 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:43.772161 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" podUID="5d7873a8-aee4-4b1b-8d62-336d952f3d15" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 24 14:54:43.772581 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:43.772161 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" podUID="d0b6ad9f-812f-409d-ad14-b998849981f7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused" Apr 24 14:54:53.771967 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:53.771928 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" podUID="5d7873a8-aee4-4b1b-8d62-336d952f3d15" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 24 14:54:53.772379 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:54:53.771932 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" podUID="d0b6ad9f-812f-409d-ad14-b998849981f7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused" Apr 24 14:55:02.100457 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.100376 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm"] Apr 24 14:55:02.100898 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.100584 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" podUID="e5eff5b6-5feb-4514-b30b-6460063dd3b4" containerName="kserve-container" containerID="cri-o://7b85a94ef2a2b6671db28ff0f520a75e6f8b4c33e473ef6e5e7a53e838951251" gracePeriod=30 Apr 24 14:55:02.114843 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.114820 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf"] Apr 24 14:55:02.115090 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.115080 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d05aacc3-8e49-401e-98fe-121e82e9c6c4" 
containerName="kserve-container" Apr 24 14:55:02.115157 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.115092 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="d05aacc3-8e49-401e-98fe-121e82e9c6c4" containerName="kserve-container" Apr 24 14:55:02.115157 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.115120 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0537423-6464-47bd-96a4-68cf178420b9" containerName="kserve-container" Apr 24 14:55:02.115157 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.115127 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0537423-6464-47bd-96a4-68cf178420b9" containerName="kserve-container" Apr 24 14:55:02.115247 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.115174 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="a0537423-6464-47bd-96a4-68cf178420b9" containerName="kserve-container" Apr 24 14:55:02.115247 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.115183 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="d05aacc3-8e49-401e-98fe-121e82e9c6c4" containerName="kserve-container" Apr 24 14:55:02.119157 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.119142 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" Apr 24 14:55:02.124067 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.124040 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf"] Apr 24 14:55:02.129459 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.129436 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" Apr 24 14:55:02.171361 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.171328 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5"] Apr 24 14:55:02.171620 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.171595 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" podUID="87e707d2-09bc-4786-b70b-9f3ed3913cd2" containerName="kserve-container" containerID="cri-o://331c2cea28cb6520d91a4d24a070c0e30929cb920dc879180e4b9d567dd4a0fe" gracePeriod=30 Apr 24 14:55:02.189917 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.189863 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll"] Apr 24 14:55:02.194776 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.194748 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" Apr 24 14:55:02.199075 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.199001 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll"] Apr 24 14:55:02.207524 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.207370 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" Apr 24 14:55:02.264148 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.264086 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf"] Apr 24 14:55:02.339852 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.339802 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll"] Apr 24 14:55:02.342488 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:55:02.342459 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf73fe81_f2ac_4357_b42c_a9e052948498.slice/crio-36dbeb0a98f4255512c52162f01eae41a48e9a5f01faaaf0ed9f54e2756e9fe7 WatchSource:0}: Error finding container 36dbeb0a98f4255512c52162f01eae41a48e9a5f01faaaf0ed9f54e2756e9fe7: Status 404 returned error can't find the container with id 36dbeb0a98f4255512c52162f01eae41a48e9a5f01faaaf0ed9f54e2756e9fe7 Apr 24 14:55:02.885463 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.885419 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" event={"ID":"bf73fe81-f2ac-4357-b42c-a9e052948498","Type":"ContainerStarted","Data":"9fb98c0d943626acf7f0271434f2e85034a8e4c969c0a8fa3bb4025cb542187a"} Apr 24 14:55:02.885463 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.885465 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" event={"ID":"bf73fe81-f2ac-4357-b42c-a9e052948498","Type":"ContainerStarted","Data":"36dbeb0a98f4255512c52162f01eae41a48e9a5f01faaaf0ed9f54e2756e9fe7"} Apr 24 14:55:02.886414 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.886355 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" Apr 24 14:55:02.887519 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.887489 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" podUID="bf73fe81-f2ac-4357-b42c-a9e052948498" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.37:8080: connect: connection refused" Apr 24 14:55:02.888351 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.888326 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" event={"ID":"9696db27-0f6d-450e-926c-c66c4ad9ab37","Type":"ContainerStarted","Data":"d3a2b33c2f4602855da66be079cd20af652d402b35661aec19d752ba5acd843f"} Apr 24 14:55:02.888468 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.888357 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" event={"ID":"9696db27-0f6d-450e-926c-c66c4ad9ab37","Type":"ContainerStarted","Data":"c33801de900602dceec33e29bb39116f2fd516fb8c8bdb81770c405104f79bcb"} Apr 24 14:55:02.889654 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.888898 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" Apr 24 14:55:02.889900 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.889841 2570 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" podUID="9696db27-0f6d-450e-926c-c66c4ad9ab37" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 24 14:55:02.900125 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.900069 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" podStartSLOduration=0.900058671 podStartE2EDuration="900.058671ms" podCreationTimestamp="2026-04-24 14:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:55:02.899420514 +0000 UTC m=+1881.024824328" watchObservedRunningTime="2026-04-24 14:55:02.900058671 +0000 UTC m=+1881.025462485" Apr 24 14:55:02.913456 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:02.913414 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" podStartSLOduration=0.913402563 podStartE2EDuration="913.402563ms" podCreationTimestamp="2026-04-24 14:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:55:02.912677048 +0000 UTC m=+1881.038080876" watchObservedRunningTime="2026-04-24 14:55:02.913402563 +0000 UTC m=+1881.038806378" Apr 24 14:55:03.771358 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:03.771316 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" podUID="5d7873a8-aee4-4b1b-8d62-336d952f3d15" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 24 14:55:03.771722 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:03.771317 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" podUID="d0b6ad9f-812f-409d-ad14-b998849981f7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused" Apr 24 14:55:03.892598 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:03.892561 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" podUID="9696db27-0f6d-450e-926c-c66c4ad9ab37" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 24 14:55:03.892598 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:03.892587 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" podUID="bf73fe81-f2ac-4357-b42c-a9e052948498" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.37:8080: connect: connection refused" Apr 24 14:55:04.895314 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:04.895270 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" podUID="9696db27-0f6d-450e-926c-c66c4ad9ab37" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 24 14:55:04.895658 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:04.895270 2570 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" podUID="bf73fe81-f2ac-4357-b42c-a9e052948498" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.37:8080: connect: connection refused" Apr 24 14:55:05.647934 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:05.647909 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" Apr 24 14:55:05.899299 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:05.899199 2570 generic.go:358] "Generic (PLEG): container finished" podID="e5eff5b6-5feb-4514-b30b-6460063dd3b4" containerID="7b85a94ef2a2b6671db28ff0f520a75e6f8b4c33e473ef6e5e7a53e838951251" exitCode=0 Apr 24 14:55:05.899299 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:05.899261 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" Apr 24 14:55:05.899299 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:05.899284 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" event={"ID":"e5eff5b6-5feb-4514-b30b-6460063dd3b4","Type":"ContainerDied","Data":"7b85a94ef2a2b6671db28ff0f520a75e6f8b4c33e473ef6e5e7a53e838951251"} Apr 24 14:55:05.899852 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:05.899327 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm" event={"ID":"e5eff5b6-5feb-4514-b30b-6460063dd3b4","Type":"ContainerDied","Data":"b9281461adfd83af2cd89c858a839afcbb56cf50b7b89efd2af321e92adb4e29"} Apr 24 14:55:05.899852 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:05.899348 2570 scope.go:117] "RemoveContainer" containerID="7b85a94ef2a2b6671db28ff0f520a75e6f8b4c33e473ef6e5e7a53e838951251" Apr 24 14:55:05.908295 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:05.908272 2570 scope.go:117] "RemoveContainer" containerID="7b85a94ef2a2b6671db28ff0f520a75e6f8b4c33e473ef6e5e7a53e838951251" Apr 24 14:55:05.908631 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:55:05.908608 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b85a94ef2a2b6671db28ff0f520a75e6f8b4c33e473ef6e5e7a53e838951251\": container with ID starting with 7b85a94ef2a2b6671db28ff0f520a75e6f8b4c33e473ef6e5e7a53e838951251 not found: ID does not exist" containerID="7b85a94ef2a2b6671db28ff0f520a75e6f8b4c33e473ef6e5e7a53e838951251" Apr 24 14:55:05.908685 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:05.908644 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b85a94ef2a2b6671db28ff0f520a75e6f8b4c33e473ef6e5e7a53e838951251"} err="failed to get container status \"7b85a94ef2a2b6671db28ff0f520a75e6f8b4c33e473ef6e5e7a53e838951251\": rpc error: code = NotFound desc = could not find container \"7b85a94ef2a2b6671db28ff0f520a75e6f8b4c33e473ef6e5e7a53e838951251\": container with ID starting with 7b85a94ef2a2b6671db28ff0f520a75e6f8b4c33e473ef6e5e7a53e838951251 not found: ID does not exist" Apr 24 14:55:05.921612 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:05.921575 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm"] Apr 24 14:55:05.925611 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:05.925582 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" 
pods=["kserve-ci-e2e-test/success-200-isvc-d5b84-predictor-5f6dbb578-r92lm"] Apr 24 14:55:06.410764 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:06.410742 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" Apr 24 14:55:06.428968 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:06.428939 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5eff5b6-5feb-4514-b30b-6460063dd3b4" path="/var/lib/kubelet/pods/e5eff5b6-5feb-4514-b30b-6460063dd3b4/volumes" Apr 24 14:55:06.904771 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:06.904741 2570 generic.go:358] "Generic (PLEG): container finished" podID="87e707d2-09bc-4786-b70b-9f3ed3913cd2" containerID="331c2cea28cb6520d91a4d24a070c0e30929cb920dc879180e4b9d567dd4a0fe" exitCode=0 Apr 24 14:55:06.905229 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:06.904788 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" event={"ID":"87e707d2-09bc-4786-b70b-9f3ed3913cd2","Type":"ContainerDied","Data":"331c2cea28cb6520d91a4d24a070c0e30929cb920dc879180e4b9d567dd4a0fe"} Apr 24 14:55:06.905229 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:06.904799 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" Apr 24 14:55:06.905229 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:06.904810 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5" event={"ID":"87e707d2-09bc-4786-b70b-9f3ed3913cd2","Type":"ContainerDied","Data":"0bd730dde4833b3a50eaf855ef65bcfa1c6b1dd859ae3cafb4176d59d1eac46b"} Apr 24 14:55:06.905229 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:06.904825 2570 scope.go:117] "RemoveContainer" containerID="331c2cea28cb6520d91a4d24a070c0e30929cb920dc879180e4b9d567dd4a0fe" Apr 24 14:55:06.912241 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:06.912225 2570 scope.go:117] "RemoveContainer" containerID="331c2cea28cb6520d91a4d24a070c0e30929cb920dc879180e4b9d567dd4a0fe" Apr 24 14:55:06.912835 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:55:06.912817 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"331c2cea28cb6520d91a4d24a070c0e30929cb920dc879180e4b9d567dd4a0fe\": container with ID starting with 331c2cea28cb6520d91a4d24a070c0e30929cb920dc879180e4b9d567dd4a0fe not found: ID does not exist" containerID="331c2cea28cb6520d91a4d24a070c0e30929cb920dc879180e4b9d567dd4a0fe" Apr 24 14:55:06.912875 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:06.912842 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331c2cea28cb6520d91a4d24a070c0e30929cb920dc879180e4b9d567dd4a0fe"} err="failed to get container status \"331c2cea28cb6520d91a4d24a070c0e30929cb920dc879180e4b9d567dd4a0fe\": rpc error: code = NotFound desc = could not find container \"331c2cea28cb6520d91a4d24a070c0e30929cb920dc879180e4b9d567dd4a0fe\": container with ID starting with 331c2cea28cb6520d91a4d24a070c0e30929cb920dc879180e4b9d567dd4a0fe not found: ID does not exist" Apr 24 14:55:06.922143 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:06.922122 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5"] Apr 24 14:55:06.928128 ip-10-0-129-34 
kubenswrapper[2570]: I0424 14:55:06.928094 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-d5b84-predictor-585896bd67-xqbm5"] Apr 24 14:55:08.433075 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:08.433037 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87e707d2-09bc-4786-b70b-9f3ed3913cd2" path="/var/lib/kubelet/pods/87e707d2-09bc-4786-b70b-9f3ed3913cd2/volumes" Apr 24 14:55:13.773157 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:13.773093 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" Apr 24 14:55:13.773536 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:13.773171 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" Apr 24 14:55:14.895348 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:14.895303 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" podUID="9696db27-0f6d-450e-926c-c66c4ad9ab37" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 24 14:55:14.895712 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:14.895306 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" podUID="bf73fe81-f2ac-4357-b42c-a9e052948498" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.37:8080: connect: connection refused" Apr 24 14:55:24.895316 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:24.895271 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" podUID="bf73fe81-f2ac-4357-b42c-a9e052948498" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.37:8080: connect: connection refused" Apr 24 14:55:24.895784 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:24.895271 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" podUID="9696db27-0f6d-450e-926c-c66c4ad9ab37" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 24 14:55:34.895656 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:34.895612 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" podUID="9696db27-0f6d-450e-926c-c66c4ad9ab37" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 24 14:55:34.896136 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:34.895622 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" podUID="bf73fe81-f2ac-4357-b42c-a9e052948498" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.37:8080: connect: connection refused" Apr 24 14:55:42.407146 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.407097 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk"] Apr 24 14:55:42.407627 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.407338 2570 kuberuntime_container.go:864] "Killing container with a grace period" 
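pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" podUID="d0b6ad9f-812f-409d-ad14-b998849981f7" containerName="kserve-container" containerID="cri-o://3d99952639221d8325e25d83e0b0d1fac82ed9c20b8327223a7f0ca591672ce6" gracePeriod=30

gracePeriod=30 in the "Killing container with a grace period" entry above is the pod's termination grace period: the runtime delivers the stop signal (SIGTERM by default) first and escalates to SIGKILL only if the process is still alive when the period runs out. The PLEG entries a few seconds later show these containers finishing with exitCode=0, so escalation was never needed here. Below is a rough, POSIX-only Go sketch of that TERM-then-KILL contract, exercised against a local `sleep` process rather than a container; it is an illustration of the pattern, not the CRI code path.

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithGrace mirrors the contract behind gracePeriod=30:
    // ask politely first, escalate only after the deadline.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        _ = cmd.Process.Signal(syscall.SIGTERM) // graceful shutdown request
        select {
        case <-done:
            fmt.Println("exited within the grace period")
        case <-time.After(grace):
            _ = cmd.Process.Kill() // the SIGKILL fallback
            <-done
            fmt.Println("killed after the grace period expired")
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        stopWithGrace(cmd, 2*time.Second) // the kubelet above used 30s
    }
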
pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" podUID="d0b6ad9f-812f-409d-ad14-b998849981f7" containerName="kserve-container" containerID="cri-o://3d99952639221d8325e25d83e0b0d1fac82ed9c20b8327223a7f0ca591672ce6" gracePeriod=30 Apr 24 14:55:42.419290 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.419258 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm"] Apr 24 14:55:42.419549 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.419535 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="87e707d2-09bc-4786-b70b-9f3ed3913cd2" containerName="kserve-container" Apr 24 14:55:42.419596 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.419553 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e707d2-09bc-4786-b70b-9f3ed3913cd2" containerName="kserve-container" Apr 24 14:55:42.419596 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.419572 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e5eff5b6-5feb-4514-b30b-6460063dd3b4" containerName="kserve-container" Apr 24 14:55:42.419596 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.419580 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5eff5b6-5feb-4514-b30b-6460063dd3b4" containerName="kserve-container" Apr 24 14:55:42.419689 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.419636 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="e5eff5b6-5feb-4514-b30b-6460063dd3b4" containerName="kserve-container" Apr 24 14:55:42.419689 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.419644 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="87e707d2-09bc-4786-b70b-9f3ed3913cd2" containerName="kserve-container" Apr 24 14:55:42.422451 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.422434 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" Apr 24 14:55:42.434692 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.434667 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" Apr 24 14:55:42.435245 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.435218 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm"] Apr 24 14:55:42.480818 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.480785 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd"] Apr 24 14:55:42.481031 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.481010 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" podUID="5d7873a8-aee4-4b1b-8d62-336d952f3d15" containerName="kserve-container" containerID="cri-o://4a44f2c30d8b217569c7eebf148ec52647ff16cba3ab716a3cdfd585620f7986" gracePeriod=30 Apr 24 14:55:42.503427 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.501972 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj"] Apr 24 14:55:42.506772 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.506723 2570 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" Apr 24 14:55:42.510960 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.510933 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj"] Apr 24 14:55:42.519209 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.519138 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" Apr 24 14:55:42.572863 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.572809 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm"] Apr 24 14:55:42.665513 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:42.665440 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj"] Apr 24 14:55:42.668557 ip-10-0-129-34 kubenswrapper[2570]: W0424 14:55:42.668527 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb52561_975e_4429_9f7a_b6d178bbaebd.slice/crio-d59191ccd456f4908d7fe9d776aef5cd677b9bfd0397cc5ef469b231e7aa0e20 WatchSource:0}: Error finding container d59191ccd456f4908d7fe9d776aef5cd677b9bfd0397cc5ef469b231e7aa0e20: Status 404 returned error can't find the container with id d59191ccd456f4908d7fe9d776aef5cd677b9bfd0397cc5ef469b231e7aa0e20 Apr 24 14:55:43.013621 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:43.013584 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" event={"ID":"7b9c5f56-9b5c-495c-a67b-8a03d7a936d1","Type":"ContainerStarted","Data":"74cdb759b7801face65e2bae3f92513c18f58f305dd9e385255ad8bc1ae50756"} Apr 24 14:55:43.013621 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:43.013629 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" event={"ID":"7b9c5f56-9b5c-495c-a67b-8a03d7a936d1","Type":"ContainerStarted","Data":"7c39a1be0909a6554807fe7d62b8849919f599df07237b7e31ecb80ea5fa4b98"} Apr 24 14:55:43.013887 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:43.013766 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" Apr 24 14:55:43.014959 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:43.014933 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" event={"ID":"8fb52561-975e-4429-9f7a-b6d178bbaebd","Type":"ContainerStarted","Data":"658776bbe67db230e586e93371481e433ecd7eb14f0f2dd5f77763f9cbc562bc"} Apr 24 14:55:43.014959 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:43.014961 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" event={"ID":"8fb52561-975e-4429-9f7a-b6d178bbaebd","Type":"ContainerStarted","Data":"d59191ccd456f4908d7fe9d776aef5cd677b9bfd0397cc5ef469b231e7aa0e20"} Apr 24 14:55:43.015184 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:43.015129 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" Apr 24 14:55:43.015322 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:43.015300 2570 prober.go:120] "Probe failed" 
probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" podUID="7b9c5f56-9b5c-495c-a67b-8a03d7a936d1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.38:8080: connect: connection refused" Apr 24 14:55:43.016031 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:43.016011 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" podUID="8fb52561-975e-4429-9f7a-b6d178bbaebd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection refused" Apr 24 14:55:43.027962 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:43.027921 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" podStartSLOduration=1.027908831 podStartE2EDuration="1.027908831s" podCreationTimestamp="2026-04-24 14:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:55:43.027387921 +0000 UTC m=+1921.152791738" watchObservedRunningTime="2026-04-24 14:55:43.027908831 +0000 UTC m=+1921.153312645" Apr 24 14:55:43.041218 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:43.041163 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" podStartSLOduration=1.041147583 podStartE2EDuration="1.041147583s" podCreationTimestamp="2026-04-24 14:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 14:55:43.040215539 +0000 UTC m=+1921.165619356" watchObservedRunningTime="2026-04-24 14:55:43.041147583 +0000 UTC m=+1921.166551483" Apr 24 14:55:43.771518 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:43.771475 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" podUID="d0b6ad9f-812f-409d-ad14-b998849981f7" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused" Apr 24 14:55:43.771887 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:43.771475 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" podUID="5d7873a8-aee4-4b1b-8d62-336d952f3d15" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 24 14:55:44.017766 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:44.017725 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" podUID="8fb52561-975e-4429-9f7a-b6d178bbaebd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection refused" Apr 24 14:55:44.017940 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:44.017819 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" podUID="7b9c5f56-9b5c-495c-a67b-8a03d7a936d1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.38:8080: connect: connection refused" Apr 24 14:55:44.896137 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:44.896079 2570 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" podUID="bf73fe81-f2ac-4357-b42c-a9e052948498" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.37:8080: connect: connection refused" Apr 24 14:55:44.896534 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:44.896079 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" podUID="9696db27-0f6d-450e-926c-c66c4ad9ab37" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 24 14:55:46.513667 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:55:46.513630 2570 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d7873a8_aee4_4b1b_8d62_336d952f3d15.slice/crio-4a44f2c30d8b217569c7eebf148ec52647ff16cba3ab716a3cdfd585620f7986.scope\": RecentStats: unable to find data in memory cache]" Apr 24 14:55:46.602457 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:46.602435 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" Apr 24 14:55:46.715047 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:46.715018 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" Apr 24 14:55:47.025989 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.025954 2570 generic.go:358] "Generic (PLEG): container finished" podID="5d7873a8-aee4-4b1b-8d62-336d952f3d15" containerID="4a44f2c30d8b217569c7eebf148ec52647ff16cba3ab716a3cdfd585620f7986" exitCode=0 Apr 24 14:55:47.026184 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.026015 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" Apr 24 14:55:47.026184 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.026046 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" event={"ID":"5d7873a8-aee4-4b1b-8d62-336d952f3d15","Type":"ContainerDied","Data":"4a44f2c30d8b217569c7eebf148ec52647ff16cba3ab716a3cdfd585620f7986"} Apr 24 14:55:47.026184 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.026087 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd" event={"ID":"5d7873a8-aee4-4b1b-8d62-336d952f3d15","Type":"ContainerDied","Data":"9f439a4c1f0a4296c9d10d7bfe925f388fe5b532ffd0390ab3c08172b7904cf2"} Apr 24 14:55:47.026184 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.026122 2570 scope.go:117] "RemoveContainer" containerID="4a44f2c30d8b217569c7eebf148ec52647ff16cba3ab716a3cdfd585620f7986" Apr 24 14:55:47.027191 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.027171 2570 generic.go:358] "Generic (PLEG): container finished" podID="d0b6ad9f-812f-409d-ad14-b998849981f7" containerID="3d99952639221d8325e25d83e0b0d1fac82ed9c20b8327223a7f0ca591672ce6" exitCode=0 Apr 24 14:55:47.027272 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.027222 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" event={"ID":"d0b6ad9f-812f-409d-ad14-b998849981f7","Type":"ContainerDied","Data":"3d99952639221d8325e25d83e0b0d1fac82ed9c20b8327223a7f0ca591672ce6"} Apr 24 14:55:47.027272 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.027240 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" event={"ID":"d0b6ad9f-812f-409d-ad14-b998849981f7","Type":"ContainerDied","Data":"9475f18882953de074a0992b0fb2b9248ecb7dddc703d1a289263f28acaf59e0"} Apr 24 14:55:47.027349 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.027277 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk" Apr 24 14:55:47.035479 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.035457 2570 scope.go:117] "RemoveContainer" containerID="4a44f2c30d8b217569c7eebf148ec52647ff16cba3ab716a3cdfd585620f7986" Apr 24 14:55:47.035787 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:55:47.035727 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a44f2c30d8b217569c7eebf148ec52647ff16cba3ab716a3cdfd585620f7986\": container with ID starting with 4a44f2c30d8b217569c7eebf148ec52647ff16cba3ab716a3cdfd585620f7986 not found: ID does not exist" containerID="4a44f2c30d8b217569c7eebf148ec52647ff16cba3ab716a3cdfd585620f7986" Apr 24 14:55:47.035787 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.035753 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a44f2c30d8b217569c7eebf148ec52647ff16cba3ab716a3cdfd585620f7986"} err="failed to get container status \"4a44f2c30d8b217569c7eebf148ec52647ff16cba3ab716a3cdfd585620f7986\": rpc error: code = NotFound desc = could not find container \"4a44f2c30d8b217569c7eebf148ec52647ff16cba3ab716a3cdfd585620f7986\": container with ID starting with 4a44f2c30d8b217569c7eebf148ec52647ff16cba3ab716a3cdfd585620f7986 not found: ID does not exist" Apr 24 14:55:47.035787 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.035769 2570 scope.go:117] "RemoveContainer" containerID="3d99952639221d8325e25d83e0b0d1fac82ed9c20b8327223a7f0ca591672ce6" Apr 24 14:55:47.042980 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.042964 2570 scope.go:117] "RemoveContainer" containerID="3d99952639221d8325e25d83e0b0d1fac82ed9c20b8327223a7f0ca591672ce6" Apr 24 14:55:47.043242 ip-10-0-129-34 kubenswrapper[2570]: E0424 14:55:47.043224 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d99952639221d8325e25d83e0b0d1fac82ed9c20b8327223a7f0ca591672ce6\": container with ID starting with 3d99952639221d8325e25d83e0b0d1fac82ed9c20b8327223a7f0ca591672ce6 not found: ID does not exist" containerID="3d99952639221d8325e25d83e0b0d1fac82ed9c20b8327223a7f0ca591672ce6" Apr 24 14:55:47.043293 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.043250 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d99952639221d8325e25d83e0b0d1fac82ed9c20b8327223a7f0ca591672ce6"} err="failed to get container status \"3d99952639221d8325e25d83e0b0d1fac82ed9c20b8327223a7f0ca591672ce6\": rpc error: code = NotFound desc = could not find container \"3d99952639221d8325e25d83e0b0d1fac82ed9c20b8327223a7f0ca591672ce6\": container with ID starting with 3d99952639221d8325e25d83e0b0d1fac82ed9c20b8327223a7f0ca591672ce6 not found: ID does not exist" Apr 24 14:55:47.050452 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.050431 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk"] Apr 24 14:55:47.053737 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.053716 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-b7286-predictor-55b765f954-w76bk"] Apr 24 14:55:47.063050 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:47.063032 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-b7286-predictor-7759f757c9-grvmd"] Apr 24 14:55:47.066946 ip-10-0-129-34 
Apr 24 14:55:48.428831 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:48.428798 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d7873a8-aee4-4b1b-8d62-336d952f3d15" path="/var/lib/kubelet/pods/5d7873a8-aee4-4b1b-8d62-336d952f3d15/volumes" Apr 24 14:55:48.429227 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:48.429032 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0b6ad9f-812f-409d-ad14-b998849981f7" path="/var/lib/kubelet/pods/d0b6ad9f-812f-409d-ad14-b998849981f7/volumes" Apr 24 14:55:54.018340 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:54.018293 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" podUID="8fb52561-975e-4429-9f7a-b6d178bbaebd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection refused" Apr 24 14:55:54.018803 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:54.018293 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" podUID="7b9c5f56-9b5c-495c-a67b-8a03d7a936d1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.38:8080: connect: connection refused" Apr 24 14:55:54.896286 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:54.896252 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" Apr 24 14:55:54.896475 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:55:54.896304 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" Apr 24 14:56:04.018503 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:56:04.018456 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" podUID="7b9c5f56-9b5c-495c-a67b-8a03d7a936d1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.38:8080: connect: connection refused" Apr 24 14:56:04.018894 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:56:04.018467 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" podUID="8fb52561-975e-4429-9f7a-b6d178bbaebd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection refused" Apr 24 14:56:14.018477 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:56:14.018432 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" podUID="7b9c5f56-9b5c-495c-a67b-8a03d7a936d1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.38:8080: connect: connection refused" Apr 24 14:56:14.018937 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:56:14.018435 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" podUID="8fb52561-975e-4429-9f7a-b6d178bbaebd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection refused" Apr 24 14:56:24.018572 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:56:24.018531 2570 prober.go:120] "Probe failed"
probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" podUID="7b9c5f56-9b5c-495c-a67b-8a03d7a936d1" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.38:8080: connect: connection refused" Apr 24 14:56:24.019013 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:56:24.018526 2570 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" podUID="8fb52561-975e-4429-9f7a-b6d178bbaebd" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection refused" Apr 24 14:56:34.019305 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:56:34.019270 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" Apr 24 14:56:34.019699 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:56:34.019337 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" Apr 24 14:58:42.455350 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:58:42.455235 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 14:58:42.459731 ip-10-0-129-34 kubenswrapper[2570]: I0424 14:58:42.459711 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 15:03:42.472575 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:03:42.472462 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 15:03:42.478360 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:03:42.478343 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 15:05:07.292869 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:07.292835 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm"] Apr 24 15:05:07.293355 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:07.293079 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" podUID="7b9c5f56-9b5c-495c-a67b-8a03d7a936d1" containerName="kserve-container" containerID="cri-o://74cdb759b7801face65e2bae3f92513c18f58f305dd9e385255ad8bc1ae50756" gracePeriod=30 Apr 24 15:05:07.325806 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:07.325762 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj"] Apr 24 15:05:07.326015 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:07.325994 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" podUID="8fb52561-975e-4429-9f7a-b6d178bbaebd" containerName="kserve-container" containerID="cri-o://658776bbe67db230e586e93371481e433ecd7eb14f0f2dd5f77763f9cbc562bc" gracePeriod=30 Apr 24 15:05:10.359015 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.358990 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" Apr 24 15:05:10.386194 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.386174 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" Apr 24 15:05:10.576266 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.576179 2570 generic.go:358] "Generic (PLEG): container finished" podID="7b9c5f56-9b5c-495c-a67b-8a03d7a936d1" containerID="74cdb759b7801face65e2bae3f92513c18f58f305dd9e385255ad8bc1ae50756" exitCode=0 Apr 24 15:05:10.576266 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.576256 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" Apr 24 15:05:10.576489 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.576264 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" event={"ID":"7b9c5f56-9b5c-495c-a67b-8a03d7a936d1","Type":"ContainerDied","Data":"74cdb759b7801face65e2bae3f92513c18f58f305dd9e385255ad8bc1ae50756"} Apr 24 15:05:10.576489 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.576297 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm" event={"ID":"7b9c5f56-9b5c-495c-a67b-8a03d7a936d1","Type":"ContainerDied","Data":"7c39a1be0909a6554807fe7d62b8849919f599df07237b7e31ecb80ea5fa4b98"} Apr 24 15:05:10.576489 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.576320 2570 scope.go:117] "RemoveContainer" containerID="74cdb759b7801face65e2bae3f92513c18f58f305dd9e385255ad8bc1ae50756" Apr 24 15:05:10.577515 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.577491 2570 generic.go:358] "Generic (PLEG): container finished" podID="8fb52561-975e-4429-9f7a-b6d178bbaebd" containerID="658776bbe67db230e586e93371481e433ecd7eb14f0f2dd5f77763f9cbc562bc" exitCode=0 Apr 24 15:05:10.577630 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.577538 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" event={"ID":"8fb52561-975e-4429-9f7a-b6d178bbaebd","Type":"ContainerDied","Data":"658776bbe67db230e586e93371481e433ecd7eb14f0f2dd5f77763f9cbc562bc"} Apr 24 15:05:10.577630 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.577556 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" event={"ID":"8fb52561-975e-4429-9f7a-b6d178bbaebd","Type":"ContainerDied","Data":"d59191ccd456f4908d7fe9d776aef5cd677b9bfd0397cc5ef469b231e7aa0e20"} Apr 24 15:05:10.577630 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.577556 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj" Apr 24 15:05:10.583778 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.583762 2570 scope.go:117] "RemoveContainer" containerID="74cdb759b7801face65e2bae3f92513c18f58f305dd9e385255ad8bc1ae50756" Apr 24 15:05:10.583990 ip-10-0-129-34 kubenswrapper[2570]: E0424 15:05:10.583973 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74cdb759b7801face65e2bae3f92513c18f58f305dd9e385255ad8bc1ae50756\": container with ID starting with 74cdb759b7801face65e2bae3f92513c18f58f305dd9e385255ad8bc1ae50756 not found: ID does not exist" containerID="74cdb759b7801face65e2bae3f92513c18f58f305dd9e385255ad8bc1ae50756" Apr 24 15:05:10.584036 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.583997 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74cdb759b7801face65e2bae3f92513c18f58f305dd9e385255ad8bc1ae50756"} err="failed to get container status \"74cdb759b7801face65e2bae3f92513c18f58f305dd9e385255ad8bc1ae50756\": rpc error: code = NotFound desc = could not find container \"74cdb759b7801face65e2bae3f92513c18f58f305dd9e385255ad8bc1ae50756\": container with ID starting with 74cdb759b7801face65e2bae3f92513c18f58f305dd9e385255ad8bc1ae50756 not found: ID does not exist" Apr 24 15:05:10.584036 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.584010 2570 scope.go:117] "RemoveContainer" containerID="658776bbe67db230e586e93371481e433ecd7eb14f0f2dd5f77763f9cbc562bc" Apr 24 15:05:10.590419 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.590368 2570 scope.go:117] "RemoveContainer" containerID="658776bbe67db230e586e93371481e433ecd7eb14f0f2dd5f77763f9cbc562bc" Apr 24 15:05:10.590651 ip-10-0-129-34 kubenswrapper[2570]: E0424 15:05:10.590627 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"658776bbe67db230e586e93371481e433ecd7eb14f0f2dd5f77763f9cbc562bc\": container with ID starting with 658776bbe67db230e586e93371481e433ecd7eb14f0f2dd5f77763f9cbc562bc not found: ID does not exist" containerID="658776bbe67db230e586e93371481e433ecd7eb14f0f2dd5f77763f9cbc562bc" Apr 24 15:05:10.590724 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.590661 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"658776bbe67db230e586e93371481e433ecd7eb14f0f2dd5f77763f9cbc562bc"} err="failed to get container status \"658776bbe67db230e586e93371481e433ecd7eb14f0f2dd5f77763f9cbc562bc\": rpc error: code = NotFound desc = could not find container \"658776bbe67db230e586e93371481e433ecd7eb14f0f2dd5f77763f9cbc562bc\": container with ID starting with 658776bbe67db230e586e93371481e433ecd7eb14f0f2dd5f77763f9cbc562bc not found: ID does not exist" Apr 24 15:05:10.591853 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.591831 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj"] Apr 24 15:05:10.594032 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.594011 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-9e32c-predictor-fc768b47c-zgmgj"] Apr 24 15:05:10.605354 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:10.605334 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm"] Apr 24 15:05:10.606224 ip-10-0-129-34 
kubenswrapper[2570]: I0424 15:05:10.606208 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-9e32c-predictor-5f45f67fc6-7wnjm"] Apr 24 15:05:12.428483 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:12.428445 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b9c5f56-9b5c-495c-a67b-8a03d7a936d1" path="/var/lib/kubelet/pods/7b9c5f56-9b5c-495c-a67b-8a03d7a936d1/volumes" Apr 24 15:05:12.428838 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:05:12.428680 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb52561-975e-4429-9f7a-b6d178bbaebd" path="/var/lib/kubelet/pods/8fb52561-975e-4429-9f7a-b6d178bbaebd/volumes" Apr 24 15:08:42.490071 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:08:42.489966 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 15:08:42.496256 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:08:42.496231 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 15:12:31.579091 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:31.579055 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf"] Apr 24 15:12:31.579631 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:31.579312 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" podUID="9696db27-0f6d-450e-926c-c66c4ad9ab37" containerName="kserve-container" containerID="cri-o://d3a2b33c2f4602855da66be079cd20af652d402b35661aec19d752ba5acd843f" gracePeriod=30 Apr 24 15:12:31.629379 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:31.629346 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll"] Apr 24 15:12:31.629609 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:31.629587 2570 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" podUID="bf73fe81-f2ac-4357-b42c-a9e052948498" containerName="kserve-container" containerID="cri-o://9fb98c0d943626acf7f0271434f2e85034a8e4c969c0a8fa3bb4025cb542187a" gracePeriod=30 Apr 24 15:12:34.731135 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:34.731087 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" Apr 24 15:12:34.742846 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:34.742780 2570 generic.go:358] "Generic (PLEG): container finished" podID="bf73fe81-f2ac-4357-b42c-a9e052948498" containerID="9fb98c0d943626acf7f0271434f2e85034a8e4c969c0a8fa3bb4025cb542187a" exitCode=0 Apr 24 15:12:34.742964 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:34.742856 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" event={"ID":"bf73fe81-f2ac-4357-b42c-a9e052948498","Type":"ContainerDied","Data":"9fb98c0d943626acf7f0271434f2e85034a8e4c969c0a8fa3bb4025cb542187a"} Apr 24 15:12:34.743891 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:34.743871 2570 generic.go:358] "Generic (PLEG): container finished" podID="9696db27-0f6d-450e-926c-c66c4ad9ab37" containerID="d3a2b33c2f4602855da66be079cd20af652d402b35661aec19d752ba5acd843f" exitCode=0 Apr 24 15:12:34.743980 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:34.743908 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" event={"ID":"9696db27-0f6d-450e-926c-c66c4ad9ab37","Type":"ContainerDied","Data":"d3a2b33c2f4602855da66be079cd20af652d402b35661aec19d752ba5acd843f"} Apr 24 15:12:34.743980 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:34.743926 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" event={"ID":"9696db27-0f6d-450e-926c-c66c4ad9ab37","Type":"ContainerDied","Data":"c33801de900602dceec33e29bb39116f2fd516fb8c8bdb81770c405104f79bcb"} Apr 24 15:12:34.743980 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:34.743943 2570 scope.go:117] "RemoveContainer" containerID="d3a2b33c2f4602855da66be079cd20af652d402b35661aec19d752ba5acd843f" Apr 24 15:12:34.743980 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:34.743947 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf" Apr 24 15:12:34.752779 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:34.752762 2570 scope.go:117] "RemoveContainer" containerID="d3a2b33c2f4602855da66be079cd20af652d402b35661aec19d752ba5acd843f" Apr 24 15:12:34.753068 ip-10-0-129-34 kubenswrapper[2570]: E0424 15:12:34.753042 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3a2b33c2f4602855da66be079cd20af652d402b35661aec19d752ba5acd843f\": container with ID starting with d3a2b33c2f4602855da66be079cd20af652d402b35661aec19d752ba5acd843f not found: ID does not exist" containerID="d3a2b33c2f4602855da66be079cd20af652d402b35661aec19d752ba5acd843f" Apr 24 15:12:34.753178 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:34.753078 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a2b33c2f4602855da66be079cd20af652d402b35661aec19d752ba5acd843f"} err="failed to get container status \"d3a2b33c2f4602855da66be079cd20af652d402b35661aec19d752ba5acd843f\": rpc error: code = NotFound desc = could not find container \"d3a2b33c2f4602855da66be079cd20af652d402b35661aec19d752ba5acd843f\": container with ID starting with d3a2b33c2f4602855da66be079cd20af652d402b35661aec19d752ba5acd843f not found: ID does not exist" Apr 24 15:12:34.766765 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:34.766741 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf"] Apr 24 15:12:34.770190 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:34.770171 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-67b80-predictor-5856fd8c6b-q6tmf"] Apr 24 15:12:34.770408 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:34.770394 2570 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" Apr 24 15:12:35.747767 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:35.747683 2570 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" Apr 24 15:12:35.748207 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:35.747689 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll" event={"ID":"bf73fe81-f2ac-4357-b42c-a9e052948498","Type":"ContainerDied","Data":"36dbeb0a98f4255512c52162f01eae41a48e9a5f01faaaf0ed9f54e2756e9fe7"} Apr 24 15:12:35.748207 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:35.747807 2570 scope.go:117] "RemoveContainer" containerID="9fb98c0d943626acf7f0271434f2e85034a8e4c969c0a8fa3bb4025cb542187a" Apr 24 15:12:35.770740 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:35.770708 2570 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll"] Apr 24 15:12:35.775696 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:35.775667 2570 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/error-404-isvc-67b80-predictor-868fcc489f-swmll"] Apr 24 15:12:36.428026 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:36.427982 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9696db27-0f6d-450e-926c-c66c4ad9ab37" path="/var/lib/kubelet/pods/9696db27-0f6d-450e-926c-c66c4ad9ab37/volumes" Apr 24 15:12:36.428336 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:36.428316 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf73fe81-f2ac-4357-b42c-a9e052948498" path="/var/lib/kubelet/pods/bf73fe81-f2ac-4357-b42c-a9e052948498/volumes" Apr 24 15:12:59.942263 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:59.942178 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-wz4gq_0b4d949f-8da9-4af5-9fe4-ee71f6d2d56b/global-pull-secret-syncer/0.log" Apr 24 15:12:59.992045 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:12:59.992007 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-dlcwf_b63a7014-b666-4162-b36c-f215db9ea517/konnectivity-agent/0.log" Apr 24 15:13:00.073057 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:00.073021 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-129-34.ec2.internal_52f7ef5b748605fa2e3167b9e181ddfa/haproxy/0.log" Apr 24 15:13:03.856037 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:03.856005 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-68hzk_22aed69f-edd2-431c-9fc1-a4244441cfaf/node-exporter/0.log" Apr 24 15:13:03.878171 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:03.878143 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-68hzk_22aed69f-edd2-431c-9fc1-a4244441cfaf/kube-rbac-proxy/0.log" Apr 24 15:13:03.902359 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:03.902322 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-68hzk_22aed69f-edd2-431c-9fc1-a4244441cfaf/init-textfile/0.log" Apr 24 15:13:07.717418 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717379 2570 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4"] Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717617 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0b6ad9f-812f-409d-ad14-b998849981f7" containerName="kserve-container" Apr 24 
15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717628 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b6ad9f-812f-409d-ad14-b998849981f7" containerName="kserve-container" Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717639 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bf73fe81-f2ac-4357-b42c-a9e052948498" containerName="kserve-container" Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717645 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf73fe81-f2ac-4357-b42c-a9e052948498" containerName="kserve-container" Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717655 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9696db27-0f6d-450e-926c-c66c4ad9ab37" containerName="kserve-container" Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717661 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="9696db27-0f6d-450e-926c-c66c4ad9ab37" containerName="kserve-container" Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717673 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8fb52561-975e-4429-9f7a-b6d178bbaebd" containerName="kserve-container" Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717678 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb52561-975e-4429-9f7a-b6d178bbaebd" containerName="kserve-container" Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717685 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5d7873a8-aee4-4b1b-8d62-336d952f3d15" containerName="kserve-container" Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717691 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d7873a8-aee4-4b1b-8d62-336d952f3d15" containerName="kserve-container" Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717696 2570 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7b9c5f56-9b5c-495c-a67b-8a03d7a936d1" containerName="kserve-container" Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717701 2570 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9c5f56-9b5c-495c-a67b-8a03d7a936d1" containerName="kserve-container" Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717737 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="8fb52561-975e-4429-9f7a-b6d178bbaebd" containerName="kserve-container" Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717745 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="bf73fe81-f2ac-4357-b42c-a9e052948498" containerName="kserve-container" Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717751 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="7b9c5f56-9b5c-495c-a67b-8a03d7a936d1" containerName="kserve-container" Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717758 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0b6ad9f-812f-409d-ad14-b998849981f7" containerName="kserve-container" Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717766 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="9696db27-0f6d-450e-926c-c66c4ad9ab37" 
containerName="kserve-container" Apr 24 15:13:07.717788 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.717772 2570 memory_manager.go:356] "RemoveStaleState removing state" podUID="5d7873a8-aee4-4b1b-8d62-336d952f3d15" containerName="kserve-container" Apr 24 15:13:07.720547 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.720531 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:07.723355 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.723332 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-8f8q8\"/\"kube-root-ca.crt\"" Apr 24 15:13:07.723535 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.723523 2570 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-8f8q8\"/\"default-dockercfg-p8tcf\"" Apr 24 15:13:07.724655 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.724640 2570 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-8f8q8\"/\"openshift-service-ca.crt\"" Apr 24 15:13:07.729846 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.729818 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4"] Apr 24 15:13:07.796937 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.796907 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-t8gtr_c8938f71-608b-4cd4-ae9c-3fee7fdcb899/dns/0.log" Apr 24 15:13:07.810681 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.810656 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/978bfab9-4e01-4109-84a3-58a38fc77179-lib-modules\") pod \"perf-node-gather-daemonset-mmmr4\" (UID: \"978bfab9-4e01-4109-84a3-58a38fc77179\") " pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:07.810847 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.810687 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/978bfab9-4e01-4109-84a3-58a38fc77179-proc\") pod \"perf-node-gather-daemonset-mmmr4\" (UID: \"978bfab9-4e01-4109-84a3-58a38fc77179\") " pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:07.810847 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.810710 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/978bfab9-4e01-4109-84a3-58a38fc77179-sys\") pod \"perf-node-gather-daemonset-mmmr4\" (UID: \"978bfab9-4e01-4109-84a3-58a38fc77179\") " pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:07.810847 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.810751 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-664fq\" (UniqueName: \"kubernetes.io/projected/978bfab9-4e01-4109-84a3-58a38fc77179-kube-api-access-664fq\") pod \"perf-node-gather-daemonset-mmmr4\" (UID: \"978bfab9-4e01-4109-84a3-58a38fc77179\") " pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:07.810847 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.810812 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: 
\"kubernetes.io/host-path/978bfab9-4e01-4109-84a3-58a38fc77179-podres\") pod \"perf-node-gather-daemonset-mmmr4\" (UID: \"978bfab9-4e01-4109-84a3-58a38fc77179\") " pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:07.816632 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.816612 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-t8gtr_c8938f71-608b-4cd4-ae9c-3fee7fdcb899/kube-rbac-proxy/0.log" Apr 24 15:13:07.858048 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.858023 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-cmhmr_7966ddd8-be1a-45a0-8020-2cd96b2fd595/dns-node-resolver/0.log" Apr 24 15:13:07.911373 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.911339 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/978bfab9-4e01-4109-84a3-58a38fc77179-lib-modules\") pod \"perf-node-gather-daemonset-mmmr4\" (UID: \"978bfab9-4e01-4109-84a3-58a38fc77179\") " pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:07.911570 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.911380 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/978bfab9-4e01-4109-84a3-58a38fc77179-proc\") pod \"perf-node-gather-daemonset-mmmr4\" (UID: \"978bfab9-4e01-4109-84a3-58a38fc77179\") " pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:07.911570 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.911403 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/978bfab9-4e01-4109-84a3-58a38fc77179-sys\") pod \"perf-node-gather-daemonset-mmmr4\" (UID: \"978bfab9-4e01-4109-84a3-58a38fc77179\") " pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:07.911570 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.911460 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/978bfab9-4e01-4109-84a3-58a38fc77179-sys\") pod \"perf-node-gather-daemonset-mmmr4\" (UID: \"978bfab9-4e01-4109-84a3-58a38fc77179\") " pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:07.911570 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.911467 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/978bfab9-4e01-4109-84a3-58a38fc77179-proc\") pod \"perf-node-gather-daemonset-mmmr4\" (UID: \"978bfab9-4e01-4109-84a3-58a38fc77179\") " pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:07.911570 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.911488 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-664fq\" (UniqueName: \"kubernetes.io/projected/978bfab9-4e01-4109-84a3-58a38fc77179-kube-api-access-664fq\") pod \"perf-node-gather-daemonset-mmmr4\" (UID: \"978bfab9-4e01-4109-84a3-58a38fc77179\") " pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:07.911570 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.911516 2570 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/978bfab9-4e01-4109-84a3-58a38fc77179-podres\") pod \"perf-node-gather-daemonset-mmmr4\" (UID: 
\"978bfab9-4e01-4109-84a3-58a38fc77179\") " pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:07.911570 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.911538 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/978bfab9-4e01-4109-84a3-58a38fc77179-lib-modules\") pod \"perf-node-gather-daemonset-mmmr4\" (UID: \"978bfab9-4e01-4109-84a3-58a38fc77179\") " pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:07.911822 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.911606 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/978bfab9-4e01-4109-84a3-58a38fc77179-podres\") pod \"perf-node-gather-daemonset-mmmr4\" (UID: \"978bfab9-4e01-4109-84a3-58a38fc77179\") " pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:07.919124 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:07.919071 2570 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-664fq\" (UniqueName: \"kubernetes.io/projected/978bfab9-4e01-4109-84a3-58a38fc77179-kube-api-access-664fq\") pod \"perf-node-gather-daemonset-mmmr4\" (UID: \"978bfab9-4e01-4109-84a3-58a38fc77179\") " pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:08.030329 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:08.030207 2570 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:08.147251 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:08.147217 2570 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4"] Apr 24 15:13:08.151153 ip-10-0-129-34 kubenswrapper[2570]: W0424 15:13:08.151123 2570 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod978bfab9_4e01_4109_84a3_58a38fc77179.slice/crio-f8469d053b4e32c8754484c559af5ce6d08d4a69cc27fc94788cca4598484c98 WatchSource:0}: Error finding container f8469d053b4e32c8754484c559af5ce6d08d4a69cc27fc94788cca4598484c98: Status 404 returned error can't find the container with id f8469d053b4e32c8754484c559af5ce6d08d4a69cc27fc94788cca4598484c98 Apr 24 15:13:08.152969 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:08.152944 2570 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 24 15:13:08.373821 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:08.373734 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-k7xj4_2e8d53e8-f5d3-4863-b7e1-8141078a84b3/node-ca/0.log" Apr 24 15:13:08.846035 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:08.845999 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" event={"ID":"978bfab9-4e01-4109-84a3-58a38fc77179","Type":"ContainerStarted","Data":"b9ab098a4ace9938a71f07c0474b8327cdcc8d91657a359e7a9f9a2f495049db"} Apr 24 15:13:08.846035 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:08.846033 2570 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" event={"ID":"978bfab9-4e01-4109-84a3-58a38fc77179","Type":"ContainerStarted","Data":"f8469d053b4e32c8754484c559af5ce6d08d4a69cc27fc94788cca4598484c98"} Apr 24 15:13:08.846492 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:08.846129 2570 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:08.863445 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:08.863399 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" podStartSLOduration=1.863384161 podStartE2EDuration="1.863384161s" podCreationTimestamp="2026-04-24 15:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 15:13:08.861932059 +0000 UTC m=+2966.987335875" watchObservedRunningTime="2026-04-24 15:13:08.863384161 +0000 UTC m=+2966.988788015" Apr 24 15:13:09.406552 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:09.406454 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-9wh5p_fecc5f1a-59b3-4f57-9f5b-3d6977ac5a65/serve-healthcheck-canary/0.log" Apr 24 15:13:09.863499 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:09.863463 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-d5lpl_8fce4c6c-54ba-47e1-969d-6a3156568317/kube-rbac-proxy/0.log" Apr 24 15:13:09.882862 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:09.882832 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-d5lpl_8fce4c6c-54ba-47e1-969d-6a3156568317/exporter/0.log" Apr 24 15:13:09.903829 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:09.903802 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-d5lpl_8fce4c6c-54ba-47e1-969d-6a3156568317/extractor/0.log" Apr 24 15:13:11.930948 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:11.930922 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_model-serving-api-86f7b4b499-28jn6_a0eb2a61-d50e-427c-b5b2-ba208e944e93/server/0.log" Apr 24 15:13:12.215191 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:12.215115 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_seaweedfs-86cc847c5c-c8vhf_c9305c6b-7c90-4218-aac7-c0b4903e2674/seaweedfs/0.log" Apr 24 15:13:14.858967 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:14.858940 2570 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-8f8q8/perf-node-gather-daemonset-mmmr4" Apr 24 15:13:17.597146 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:17.597117 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-jvbnt_0d6bd978-a62b-4e69-9786-a9b7774d09db/kube-multus-additional-cni-plugins/0.log" Apr 24 15:13:17.620090 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:17.620061 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-jvbnt_0d6bd978-a62b-4e69-9786-a9b7774d09db/egress-router-binary-copy/0.log" Apr 24 15:13:17.647387 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:17.647366 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-jvbnt_0d6bd978-a62b-4e69-9786-a9b7774d09db/cni-plugins/0.log" Apr 24 15:13:17.673545 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:17.673524 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-jvbnt_0d6bd978-a62b-4e69-9786-a9b7774d09db/bond-cni-plugin/0.log" Apr 24 15:13:17.698486 
ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:17.698418 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-jvbnt_0d6bd978-a62b-4e69-9786-a9b7774d09db/routeoverride-cni/0.log" Apr 24 15:13:17.726145 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:17.726118 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-jvbnt_0d6bd978-a62b-4e69-9786-a9b7774d09db/whereabouts-cni-bincopy/0.log" Apr 24 15:13:17.749864 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:17.749842 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-jvbnt_0d6bd978-a62b-4e69-9786-a9b7774d09db/whereabouts-cni/0.log" Apr 24 15:13:17.789763 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:17.789712 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fn8gq_1b6507b4-e71a-44e1-8d03-18abcb3b225d/kube-multus/0.log" Apr 24 15:13:17.903668 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:17.903634 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-ld8rd_62277dce-4b78-4158-9951-1292c0fa443c/network-metrics-daemon/0.log" Apr 24 15:13:17.927525 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:17.927492 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-ld8rd_62277dce-4b78-4158-9951-1292c0fa443c/kube-rbac-proxy/0.log" Apr 24 15:13:18.742670 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:18.742583 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-controller/0.log" Apr 24 15:13:18.758376 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:18.758345 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/0.log" Apr 24 15:13:18.785157 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:18.785124 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovn-acl-logging/1.log" Apr 24 15:13:18.812597 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:18.812571 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/kube-rbac-proxy-node/0.log" Apr 24 15:13:18.836729 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:18.836695 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/kube-rbac-proxy-ovn-metrics/0.log" Apr 24 15:13:18.855644 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:18.855614 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/northd/0.log" Apr 24 15:13:18.880403 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:18.880367 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/nbdb/0.log" Apr 24 15:13:18.901856 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:18.901819 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/sbdb/0.log" Apr 24 15:13:19.064331 ip-10-0-129-34 kubenswrapper[2570]: I0424 
15:13:19.064254 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dz989_090e3afb-c111-4bf0-a107-0156c2f3a0f2/ovnkube-controller/0.log" Apr 24 15:13:20.570558 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:20.570525 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-7b299_cb704828-9d72-448f-8256-1dda6f6273ea/network-check-target-container/0.log" Apr 24 15:13:21.495868 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:21.495839 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-lfbtg_2bb3748f-64b2-4249-91e8-54ba5dd9c145/iptables-alerter/0.log" Apr 24 15:13:22.164217 ip-10-0-129-34 kubenswrapper[2570]: I0424 15:13:22.164183 2570 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-qd58c_d852814e-e573-4e8b-b69a-d17116e07af7/tuned/0.log"