Apr 16 17:59:47.372849 ip-10-0-128-59 systemd[1]: kubelet.service: Failed to load environment files: No such file or directory
Apr 16 17:59:47.372863 ip-10-0-128-59 systemd[1]: kubelet.service: Failed to run 'start-pre' task: No such file or directory
Apr 16 17:59:47.372872 ip-10-0-128-59 systemd[1]: kubelet.service: Failed with result 'resources'.
Apr 16 17:59:47.373151 ip-10-0-128-59 systemd[1]: Failed to start Kubernetes Kubelet.
Apr 16 17:59:57.546626 ip-10-0-128-59 systemd[1]: kubelet.service: Failed to schedule restart job: Unit crio.service not found.
Apr 16 17:59:57.546652 ip-10-0-128-59 systemd[1]: kubelet.service: Failed with result 'resources'.
-- Boot 7411a0fddee4418892189a69083e4a65 --
Apr 16 18:02:22.061885 ip-10-0-128-59 systemd[1]: Starting Kubernetes Kubelet...
Apr 16 18:02:22.468056 ip-10-0-128-59 kubenswrapper[2571]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 18:02:22.468056 ip-10-0-128-59 kubenswrapper[2571]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 16 18:02:22.468056 ip-10-0-128-59 kubenswrapper[2571]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 18:02:22.468056 ip-10-0-128-59 kubenswrapper[2571]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 18:02:22.468056 ip-10-0-128-59 kubenswrapper[2571]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
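Two different things are visible above. The earlier boot failed hard: systemd could not read the unit's environment files and could not find crio.service, so kubelet.service ended with result 'resources'. The deprecation warnings in the new boot, by contrast, are advisory only; the flags still take effect, and upstream wants them moved into the file passed via --config (the flag dump further down shows --config="/etc/kubernetes/kubelet.conf"). A minimal shell sketch for checking both conditions on the node; the unit name and paths are taken from this log, and interactive shell access to the node is assumed:

  # Show the EnvironmentFile= lines systemd could not load in the earlier boot
  systemctl cat kubelet.service | grep -n 'EnvironmentFile'

  # Confirm whether the crio.service unit the restart job could not find exists
  systemctl list-unit-files 'crio*'

  # See which of the deprecated flags already have config-file equivalents
  # (field names are from the upstream KubeletConfiguration API)
  grep -nE 'containerRuntimeEndpoint|volumePluginDir|systemReserved' /etc/kubernetes/kubelet.conf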
Apr 16 18:02:22.471212 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.471126 2571 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 18:02:22.477452 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477431 2571 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 18:02:22.477452 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477450 2571 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 18:02:22.477452 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477454 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 18:02:22.477452 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477457 2571 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 18:02:22.477452 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477460 2571 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477463 2571 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477466 2571 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477469 2571 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477473 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477475 2571 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477478 2571 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477481 2571 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477484 2571 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477487 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477489 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477492 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477495 2571 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477497 2571 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477500 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477503 2571 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477506 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477509 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477511 2571 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477514 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 18:02:22.477672 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477517 2571 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477519 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477522 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477525 2571 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477528 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477531 2571 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477533 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477536 2571 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477538 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477541 2571 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477544 2571 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477546 2571 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477548 2571 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477551 2571 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477554 2571 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477557 2571 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477560 2571 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477563 2571 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477565 2571 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477568 2571 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 18:02:22.478150 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477570 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477574 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477577 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477581 2571 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477607 2571 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477612 2571 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477615 2571 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477618 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477621 2571 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477625 2571 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477628 2571 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477631 2571 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477633 2571 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477636 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477638 2571 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477641 2571 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477643 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477646 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477649 2571 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 18:02:22.478687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477651 2571 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477654 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477656 2571 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477659 2571 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477661 2571 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477664 2571 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477666 2571 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477671 2571 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477676 2571 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477679 2571 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477684 2571 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477686 2571 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477689 2571 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477692 2571 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477695 2571 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477698 2571 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477701 2571 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477703 2571 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477706 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 18:02:22.479137 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477708 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477711 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477713 2571 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.477715 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478109 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478114 2571 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478116 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478119 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478122 2571 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478124 2571 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478128 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478130 2571 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478133 2571 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478136 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478138 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478141 2571 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478143 2571 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478146 2571 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478148 2571 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 18:02:22.479583 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478151 2571 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478156 2571 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478159 2571 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478161 2571 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478164 2571 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478167 2571 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478170 2571 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478173 2571 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478176 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478180 2571 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478182 2571 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478185 2571 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478187 2571 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478190 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478193 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478195 2571 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478198 2571 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478200 2571 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478204 2571 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478206 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 18:02:22.480051 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478209 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478212 2571 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478214 2571 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478217 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478220 2571 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478222 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478225 2571 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478228 2571 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478232 2571 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478235 2571 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478238 2571 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478241 2571 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478244 2571 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478247 2571 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478250 2571 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478252 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478255 2571 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478258 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478261 2571 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 18:02:22.480548 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478264 2571 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478267 2571 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478269 2571 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478273 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478275 2571 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478278 2571 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478280 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478283 2571 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478285 2571 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478288 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478290 2571 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478293 2571 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478296 2571 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478298 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478301 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478304 2571 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478307 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478309 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478313 2571 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478316 2571 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 18:02:22.481029 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478319 2571 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478321 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478324 2571 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478326 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478329 2571 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478331 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478334 2571 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478337 2571 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478340 2571 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478342 2571 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478345 2571 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.478347 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478872 2571 flags.go:64] FLAG: --address="0.0.0.0"
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478882 2571 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478889 2571 flags.go:64] FLAG: --anonymous-auth="true"
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478894 2571 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478899 2571 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478903 2571 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478907 2571 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478912 2571 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478915 2571 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 16 18:02:22.481506 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478919 2571 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478923 2571 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478926 2571 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478929 2571 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478932 2571 flags.go:64] FLAG: --cgroup-root=""
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478935 2571 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478938 2571 flags.go:64] FLAG: --client-ca-file=""
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478941 2571 flags.go:64] FLAG: --cloud-config=""
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478943 2571 flags.go:64] FLAG: --cloud-provider="external"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478946 2571 flags.go:64] FLAG: --cluster-dns="[]"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478951 2571 flags.go:64] FLAG: --cluster-domain=""
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478954 2571 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478957 2571 flags.go:64] FLAG: --config-dir=""
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478960 2571 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478963 2571 flags.go:64] FLAG: --container-log-max-files="5"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478967 2571 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478970 2571 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478973 2571 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478977 2571 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478980 2571 flags.go:64] FLAG: --contention-profiling="false"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478983 2571 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478985 2571 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478989 2571 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478992 2571 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.478996 2571 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 16 18:02:22.482074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479002 2571 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479005 2571 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479008 2571 flags.go:64] FLAG: --enable-load-reader="false"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479014 2571 flags.go:64] FLAG: --enable-server="true"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479017 2571 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479023 2571 flags.go:64] FLAG: --event-burst="100"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479026 2571 flags.go:64] FLAG: --event-qps="50"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479029 2571 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479032 2571 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479035 2571 flags.go:64] FLAG: --eviction-hard=""
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479039 2571 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479042 2571 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479045 2571 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479048 2571 flags.go:64] FLAG: --eviction-soft=""
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479051 2571 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479053 2571 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479056 2571 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479059 2571 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479062 2571 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479065 2571 flags.go:64] FLAG: --fail-swap-on="true"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479068 2571 flags.go:64] FLAG: --feature-gates=""
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479072 2571 flags.go:64] FLAG: --file-check-frequency="20s"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479075 2571 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479079 2571 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479082 2571 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479085 2571 flags.go:64] FLAG: --healthz-port="10248"
Apr 16 18:02:22.482704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479088 2571 flags.go:64] FLAG: --help="false"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479091 2571 flags.go:64] FLAG: --hostname-override="ip-10-0-128-59.ec2.internal"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479094 2571 flags.go:64] FLAG: --housekeeping-interval="10s"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479097 2571 flags.go:64] FLAG: --http-check-frequency="20s"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479100 2571 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479103 2571 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479109 2571 flags.go:64] FLAG: --image-gc-high-threshold="85"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479112 2571 flags.go:64] FLAG: --image-gc-low-threshold="80"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479115 2571 flags.go:64] FLAG: --image-service-endpoint=""
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479118 2571 flags.go:64] FLAG: --kernel-memcg-notification="false"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479121 2571 flags.go:64] FLAG: --kube-api-burst="100"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479124 2571 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479128 2571 flags.go:64] FLAG: --kube-api-qps="50"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479131 2571 flags.go:64] FLAG: --kube-reserved=""
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479134 2571 flags.go:64] FLAG: --kube-reserved-cgroup=""
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479137 2571 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479140 2571 flags.go:64] FLAG: --kubelet-cgroups=""
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479143 2571 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479146 2571 flags.go:64] FLAG: --lock-file=""
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479149 2571 flags.go:64] FLAG: --log-cadvisor-usage="false"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479152 2571 flags.go:64] FLAG: --log-flush-frequency="5s"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479155 2571 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479161 2571 flags.go:64] FLAG: --log-json-split-stream="false"
Apr 16 18:02:22.483323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479164 2571 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479166 2571 flags.go:64] FLAG: --log-text-split-stream="false"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479169 2571 flags.go:64] FLAG: --logging-format="text"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479172 2571 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479176 2571 flags.go:64] FLAG: --make-iptables-util-chains="true"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479179 2571 flags.go:64] FLAG: --manifest-url=""
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479182 2571 flags.go:64] FLAG: --manifest-url-header=""
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479186 2571 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479190 2571 flags.go:64] FLAG: --max-open-files="1000000"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479194 2571 flags.go:64] FLAG: --max-pods="110"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479197 2571 flags.go:64] FLAG: --maximum-dead-containers="-1"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479200 2571 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479202 2571 flags.go:64] FLAG: --memory-manager-policy="None"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479205 2571 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479208 2571 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479212 2571 flags.go:64] FLAG: --node-ip="0.0.0.0"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479215 2571 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479222 2571 flags.go:64] FLAG: --node-status-max-images="50"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479226 2571 flags.go:64] FLAG: --node-status-update-frequency="10s"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479229 2571 flags.go:64] FLAG: --oom-score-adj="-999"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479232 2571 flags.go:64] FLAG: --pod-cidr=""
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479235 2571 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc76bab72f320de3d4105c90d73c4fb139c09e20ce0fa8dcbc0cb59920d27dec"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479241 2571 flags.go:64] FLAG: --pod-manifest-path=""
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479244 2571 flags.go:64] FLAG: --pod-max-pids="-1"
Apr 16 18:02:22.483929 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479248 2571 flags.go:64] FLAG: --pods-per-core="0"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479250 2571 flags.go:64] FLAG: --port="10250"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479253 2571 flags.go:64] FLAG: --protect-kernel-defaults="false"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479260 2571 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-06d645ac8f12e4f45"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479263 2571 flags.go:64] FLAG: --qos-reserved=""
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479266 2571 flags.go:64] FLAG: --read-only-port="10255"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479269 2571 flags.go:64] FLAG: --register-node="true"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479272 2571 flags.go:64] FLAG: --register-schedulable="true"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479275 2571 flags.go:64] FLAG: --register-with-taints=""
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479278 2571 flags.go:64] FLAG: --registry-burst="10"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479281 2571 flags.go:64] FLAG: --registry-qps="5"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479284 2571 flags.go:64] FLAG: --reserved-cpus=""
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479287 2571 flags.go:64] FLAG: --reserved-memory=""
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479290 2571 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479293 2571 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479296 2571 flags.go:64] FLAG: --rotate-certificates="false"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479299 2571 flags.go:64] FLAG: --rotate-server-certificates="false"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479302 2571 flags.go:64] FLAG: --runonce="false"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479305 2571 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479308 2571 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479311 2571 flags.go:64] FLAG: --seccomp-default="false"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479314 2571 flags.go:64] FLAG: --serialize-image-pulls="true"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479316 2571 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479320 2571 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479323 2571 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479326 2571 flags.go:64] FLAG: --storage-driver-password="root"
Apr 16 18:02:22.484501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479329 2571 flags.go:64] FLAG: --storage-driver-secure="false"
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479331 2571 flags.go:64] FLAG: --storage-driver-table="stats"
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479334 2571 flags.go:64] FLAG: --storage-driver-user="root"
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479337 2571 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479340 2571 flags.go:64] FLAG: --sync-frequency="1m0s"
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479344 2571 flags.go:64] FLAG: --system-cgroups=""
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479347 2571 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479352 2571 flags.go:64] FLAG: --system-reserved-cgroup=""
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479355 2571 flags.go:64] FLAG: --tls-cert-file=""
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479359 2571 flags.go:64] FLAG: --tls-cipher-suites="[]"
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479364 2571 flags.go:64] FLAG: --tls-min-version=""
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479367 2571 flags.go:64] FLAG: --tls-private-key-file=""
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479370 2571 flags.go:64] FLAG: --topology-manager-policy="none"
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479373 2571 flags.go:64] FLAG: --topology-manager-policy-options=""
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479376 2571 flags.go:64] FLAG: --topology-manager-scope="container"
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479379 2571 flags.go:64] FLAG: --v="2"
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479383 2571 flags.go:64] FLAG: --version="false"
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479387 2571 flags.go:64] FLAG: --vmodule=""
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479391 2571 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.479394 2571 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479504 2571 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479508 2571 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479511 2571 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479514 2571 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 18:02:22.485175 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479517 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479520 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479523 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479525 2571 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479528 2571 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479531 2571 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479534 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479537 2571 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479540 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479542 2571 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479545 2571 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479547 2571 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479550 2571 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479553 2571 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479556 2571 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479558 2571 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479561 2571 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479564 2571 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479567 2571 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479569 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 18:02:22.485778 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479572 2571 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479574 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479577 2571 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479580 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479582 2571 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479601 2571 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479606 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479609 2571 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479612 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479615 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479618 2571 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479620 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479623 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479625 2571 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479628 2571 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479631 2571 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479634 2571 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479637 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479639 2571 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 18:02:22.486280 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479642 2571 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479646 2571 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479649 2571 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479652 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479655 2571 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479658 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479662 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479664 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479667 2571 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479670 2571 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479674 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479676 2571 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479679 2571 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479681 2571 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479684 2571 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479687 2571 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479689 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479692 2571 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479694 2571 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 18:02:22.486755 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479697 2571 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479699 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479702 2571 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479704 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479707 2571 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479709 2571 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479712 2571 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479715 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479717 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479720 2571 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479723 2571 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479729 2571 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479733 2571 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479737 2571 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479739 2571 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479742 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479745 2571 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479747 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479750 2571 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 18:02:22.487223 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479753 2571 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 18:02:22.487708 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479756 2571 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 18:02:22.487708 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479760 2571 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 18:02:22.487708 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479762 2571 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 18:02:22.487708 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.479766 2571 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 18:02:22.487708 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.480561 2571 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 16 18:02:22.487708 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.486975 2571 server.go:530] "Kubelet version" kubeletVersion="v1.33.9"
Apr 16 18:02:22.487708 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.486991 2571 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 18:02:22.487708 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487040 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 18:02:22.487708 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487045 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 18:02:22.487708 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487048 2571 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 18:02:22.487708 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487051 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 18:02:22.487708 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487054 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 18:02:22.487708 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487057 2571 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 18:02:22.487708 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487060 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 18:02:22.487708 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487062 2571 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 18:02:22.487708 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487066 2571 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487068 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487071 2571 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487073 2571 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487076 2571 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487078 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487081 2571 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487084 2571 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487086 2571 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487089 2571 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487092 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487095 2571 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487098 2571 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487100 2571 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487103 2571 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487106 2571 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487108 2571 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487111 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487113 2571 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487116 2571 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 18:02:22.488106 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487119 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487121 2571 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487124 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487128 2571 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487131 2571 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487134 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487137 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487139 2571 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487143 2571 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487147 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487150 2571 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487153 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487156 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487158 2571 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487161 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487164 2571 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487166 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487169 2571 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487171 2571 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 18:02:22.488642 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487175 2571 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487180 2571 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487183 2571 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487186 2571 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487188 2571 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487191 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487193 2571 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487196 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487199 2571 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487201 2571 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487204 2571 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487206 2571 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487209 2571 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487211 2571 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487213 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487216 2571 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487219 2571 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487222 2571 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487225 2571 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487228 2571 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 18:02:22.489104 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487231 2571 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487234 2571 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487236 2571 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487239 2571 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487241 2571 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487244 2571 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487246 2571 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487249 2571 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487251 2571 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487254 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487256 2571 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487259 2571 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487261 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487263 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487266 2571 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487268 2571 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487271 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487273 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 18:02:22.489577 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487276 2571 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 18:02:22.490056 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.487282 2571 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 16 18:02:22.490056 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487378 2571 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 18:02:22.490056 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487382 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 18:02:22.490056 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487385 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 18:02:22.490056 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487389 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 18:02:22.490056 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487392 2571 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 18:02:22.490056 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487395 2571 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 18:02:22.490056 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487397 2571 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 18:02:22.490056 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487400 2571 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 18:02:22.490056 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487403 2571 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 18:02:22.490056 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487405 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 18:02:22.490056 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487409 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 18:02:22.490056 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487411 2571 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 18:02:22.490056 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487414 2571 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 18:02:22.490056 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487417 2571 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 18:02:22.490056 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487419 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487422 2571 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487425 2571 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487427 2571 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487430 2571 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487432 2571 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487435 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487438 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487440 2571 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487443 2571 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487445 2571 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487448 2571 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487451 2571 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487454 2571 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487456 2571 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487459 2571 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487462 2571 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487465 2571 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487467 2571 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 18:02:22.490454 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487470 2571 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487472 2571 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487475 2571 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487478 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487480 2571 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487483 2571 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487486 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487488 2571 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487491 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487495 2571 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487505 2571 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487508 2571 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487511 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487513 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487516 2571 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487519 2571 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487521 2571 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487524 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487527 2571 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487529 2571 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 18:02:22.490919 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487532 2571 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487535 2571 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487537 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487539 2571 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487542 2571 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487545 2571 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487547 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487550 2571 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487552 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487557 2571 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487561 2571 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487564 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487566 2571 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487569 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487572 2571 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487575 2571 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487577 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487579 2571 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487582 2571 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 18:02:22.491403 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487584 2571 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 18:02:22.491911 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487600 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 18:02:22.491911 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487602 2571 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 18:02:22.491911 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487605 2571 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 18:02:22.491911 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487608 2571 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 18:02:22.491911 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487611 2571 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 18:02:22.491911 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487613 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 18:02:22.491911 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487616 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 18:02:22.491911 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487618 2571 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 18:02:22.491911 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487621 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 18:02:22.491911 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487623 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 18:02:22.491911 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487626 2571 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 18:02:22.491911 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487629 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 18:02:22.491911 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:22.487631 2571 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 18:02:22.491911 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.487636 2571 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 16 18:02:22.491911 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.488424 2571 server.go:962] "Client rotation is on, will bootstrap in background"
Apr 16 18:02:22.492294 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.491422 2571 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Apr 16 18:02:22.492294 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.492242 2571 server.go:1019] "Starting client certificate rotation"
Apr 16 18:02:22.492348 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.492335 2571 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 16 18:02:22.492378 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.492370 2571 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 16 18:02:22.514896 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.514873 2571 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 16 18:02:22.517298 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.517266 2571 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 16 18:02:22.531225 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.531205 2571 log.go:25] "Validated CRI v1 runtime API"
Apr 16 18:02:22.537013 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.536995 2571 log.go:25] "Validated CRI v1 image API"
Apr 16 18:02:22.538407 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.538385 2571 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 18:02:22.541167 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.541146 2571 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/nvme0n1p2 c439a574-ba50-4fbd-bf3b-3ede23a5f5c9:/dev/nvme0n1p4 cf5fa687-63ae-46bd-9d1d-0d52596abeb3:/dev/nvme0n1p3]
Apr 16 18:02:22.541224 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.541167 2571 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Apr 16 18:02:22.546645 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.546512 2571 manager.go:217] Machine: {Timestamp:2026-04-16 18:02:22.544906984 +0000 UTC m=+0.373870107 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3102523 MemoryCapacity:33164496896 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2b8111c93743c1c67738a43beb0dc7 SystemUUID:ec2b8111-c937-43c1-c677-38a43beb0dc7 BootID:7411a0fd-dee4-4188-9218-9a69083e4a65 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582250496 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582246400 Type:vfs Inodes:4048400 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632902656 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6098944 Type:vfs Inodes:18446744073709551615 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:9c:ae:e1:20:2d Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:9c:ae:e1:20:2d Speed:0 Mtu:9001} {Name:ovs-system MacAddress:26:b3:24:a1:b7:7a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164496896 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Apr 16 18:02:22.546645 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.546633 2571 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Apr 16 18:02:22.546784 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.546716 2571 manager.go:233] Version: {KernelVersion:5.14.0-570.104.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260401-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Apr 16 18:02:22.549194 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.549166 2571 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 18:02:22.549338 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.549196 2571 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-128-59.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 18:02:22.549388 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.549345 2571 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 18:02:22.549388 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.549354 2571 container_manager_linux.go:306] "Creating device plugin manager"
Apr 16 18:02:22.549388 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.549367 2571 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 16 18:02:22.550130 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.550117 2571 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 16 18:02:22.551231 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.551221 2571 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 18:02:22.551335 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.551326 2571 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Apr 16 18:02:22.553332 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.553320 2571 kubelet.go:491] "Attempting to sync node with API server"
Apr 16 18:02:22.553374 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.553344 2571 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 18:02:22.553374 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.553363 2571 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Apr 16 18:02:22.553374 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.553374 2571 kubelet.go:397] "Adding apiserver pod source"
Apr 16 18:02:22.553468 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.553390 2571 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 18:02:22.554234 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.554212 2571 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 16 18:02:22.554415 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.554403 2571 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Apr 16 18:02:22.554464 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.554423 2571 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Apr 16 18:02:22.557628 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.557542 2571 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1"
Apr 16 18:02:22.559996 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.559979 2571 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 18:02:22.561650 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.561638 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Apr 16 18:02:22.561712 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.561654 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Apr 16 18:02:22.561712 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.561661 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Apr 16 18:02:22.561712 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.561666 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Apr 16 18:02:22.561712 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.561671 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Apr 16 18:02:22.561712 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.561678 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Apr 16 18:02:22.561712 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.561684 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Apr 16 18:02:22.561712 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.561690 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Apr 16 18:02:22.561712 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.561697 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Apr 16 18:02:22.561712 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.561703 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Apr 16 18:02:22.561712 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.561711 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Apr 16 18:02:22.561966 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.561719 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Apr 16 18:02:22.562476 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.562466 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Apr 16 18:02:22.562510 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.562480 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Apr 16 18:02:22.566097 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.566083 2571 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 16 18:02:22.566172 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.566117 2571 server.go:1295] "Started kubelet"
Apr 16 18:02:22.566248 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.566206 2571 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 18:02:22.566286 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.566211 2571 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 18:02:22.566286 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.566275 2571 server_v1.go:47] "podresources" method="list" useActivePods=true
Apr 16 18:02:22.566976 ip-10-0-128-59 systemd[1]: Started Kubernetes Kubelet.
Apr 16 18:02:22.567530 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.567476 2571 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 18:02:22.569016 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.569000 2571 server.go:317] "Adding debug handlers to kubelet server"
Apr 16 18:02:22.571355 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.571335 2571 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-128-59.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 16 18:02:22.571355 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.571334 2571 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-128-59.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 18:02:22.571469 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.571338 2571 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 18:02:22.574378 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.574359 2571 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Apr 16 18:02:22.574559 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.574531 2571 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Apr 16 18:02:22.575114 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.575094 2571 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 18:02:22.575694 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.575679 2571 factory.go:55] Registering systemd factory
Apr 16 18:02:22.575767 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.575697 2571 factory.go:223] Registration of the systemd container factory successfully
Apr 16 18:02:22.575818 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.575782 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-128-59.ec2.internal\" not found"
Apr 16 18:02:22.575884 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.575875 2571 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 16 18:02:22.575926 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.575886 2571 factory.go:153] Registering CRI-O factory
Apr 16 18:02:22.575926 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.575897 2571 factory.go:223] Registration of the crio container factory successfully
Apr 16 18:02:22.575994 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.575878 2571 volume_manager.go:295] "The desired_state_of_world populator starts"
Apr 16 18:02:22.575994 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.575956 2571 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 16 18:02:22.575994 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.575957 2571 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Apr 16 18:02:22.575994 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.575977 2571 factory.go:103] Registering Raw factory
Apr 16 18:02:22.575994 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.575992 2571 manager.go:1196] Started watching for new ooms in manager
Apr 16 18:02:22.576178 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.576039 2571 reconstruct.go:97] "Volume reconstruction finished"
Apr 16 18:02:22.576178 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.576047 2571 reconciler.go:26] "Reconciler: start to sync state"
Apr 16 18:02:22.576507 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.576494 2571 manager.go:319] Starting recovery of all containers
Apr 16 18:02:22.579267 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.579243 2571 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 18:02:22.580074 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.580031 2571 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-128-59.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Apr 16 18:02:22.580360 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.579306 2571 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-128-59.ec2.internal.18a6e85225f6382f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-128-59.ec2.internal,UID:ip-10-0-128-59.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-128-59.ec2.internal,},FirstTimestamp:2026-04-16 18:02:22.566094895 +0000 UTC m=+0.395058019,LastTimestamp:2026-04-16 18:02:22.566094895 +0000 UTC m=+0.395058019,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-128-59.ec2.internal,}"
Apr 16 18:02:22.588123 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.587969 2571 manager.go:324] Recovery completed
Apr 16 18:02:22.593643 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.593548 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 18:02:22.595707 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.595684 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 18:02:22.595765 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.595720 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 18:02:22.595765 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.595734 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasSufficientPID"
Apr 16 18:02:22.596232 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.596219 2571 cpu_manager.go:222] "Starting CPU manager" policy="none"
Apr 16 18:02:22.596272 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.596234 2571 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Apr 16 18:02:22.596272 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.596250 2571 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 18:02:22.598889 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.598877 2571 policy_none.go:49] "None policy: Start"
Apr 16 18:02:22.598936 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.598894 2571 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 16 18:02:22.598936 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.598904 2571 state_mem.go:35] "Initializing new in-memory state store"
Apr 16 18:02:22.599636 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.599559 2571 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-128-59.ec2.internal.18a6e85227ba0ffd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-128-59.ec2.internal,UID:ip-10-0-128-59.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-128-59.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-128-59.ec2.internal,},FirstTimestamp:2026-04-16 18:02:22.595706877 +0000 UTC m=+0.424670000,LastTimestamp:2026-04-16 18:02:22.595706877 +0000 UTC m=+0.424670000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-128-59.ec2.internal,}"
Apr 16 18:02:22.605951 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.605935 2571 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-5bpw7"
Apr 16 18:02:22.611442 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.611360 2571 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-128-59.ec2.internal.18a6e85227ba6106 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-128-59.ec2.internal,UID:ip-10-0-128-59.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-128-59.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-128-59.ec2.internal,},FirstTimestamp:2026-04-16 18:02:22.595727622 +0000 UTC m=+0.424690747,LastTimestamp:2026-04-16 18:02:22.595727622 +0000 UTC m=+0.424690747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-128-59.ec2.internal,}"
Apr 16 18:02:22.621245 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.621223 2571 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-5bpw7"
Apr 16 18:02:22.634213 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.634199 2571 manager.go:341] "Starting Device Plugin manager"
Apr 16 18:02:22.646426 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.634228 2571 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 18:02:22.646426 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.634238 2571 server.go:85] "Starting device plugin registration server"
Apr 16 18:02:22.646426 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.634519 2571 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 18:02:22.646426 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.634528 2571 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 18:02:22.646426 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.634604 2571 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Apr 16 18:02:22.646426 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.634693 2571 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Apr 16 18:02:22.646426 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.634704 2571 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 18:02:22.646426 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.635172 2571 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Apr 16 18:02:22.646426 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.635212 2571 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-128-59.ec2.internal\" not found"
Apr 16 18:02:22.708534 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.708504 2571 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 16 18:02:22.709747 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.709719 2571 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 16 18:02:22.709859 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.709756 2571 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 16 18:02:22.709859 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.709780 2571 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 18:02:22.709859 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.709795 2571 kubelet.go:2451] "Starting kubelet main sync loop"
Apr 16 18:02:22.709859 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.709836 2571 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Apr 16 18:02:22.713199 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.713179 2571 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 18:02:22.735576 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.735533 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 18:02:22.736544 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.736530 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 18:02:22.736638 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.736558 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 18:02:22.736638 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.736568 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasSufficientPID"
Apr 16 18:02:22.736638 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.736607 2571 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-128-59.ec2.internal"
Apr 16 18:02:22.745025 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.745010 2571 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-128-59.ec2.internal"
Apr 16 18:02:22.745082 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.745031 2571 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-128-59.ec2.internal\": node \"ip-10-0-128-59.ec2.internal\" not found"
Apr 16 18:02:22.761251 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.761234 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-128-59.ec2.internal\" not found"
Apr 16 18:02:22.810802 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.810777 2571 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-128-59.ec2.internal"]
Apr 16 18:02:22.810875 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.810851 2571 kubelet_node_status.go:413]
"Setting node annotation to enable volume controller attach/detach" Apr 16 18:02:22.812294 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.812280 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasSufficientMemory" Apr 16 18:02:22.812364 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.812308 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 18:02:22.812364 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.812321 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasSufficientPID" Apr 16 18:02:22.813900 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.813888 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 18:02:22.814028 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.814015 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal" Apr 16 18:02:22.814078 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.814042 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 18:02:22.814569 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.814556 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasSufficientMemory" Apr 16 18:02:22.814661 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.814581 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 18:02:22.814661 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.814556 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasSufficientMemory" Apr 16 18:02:22.814661 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.814607 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasSufficientPID" Apr 16 18:02:22.814661 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.814627 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 18:02:22.814661 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.814646 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasSufficientPID" Apr 16 18:02:22.816499 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.816484 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-128-59.ec2.internal" Apr 16 18:02:22.816579 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.816508 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 18:02:22.817138 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.817122 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasSufficientMemory" Apr 16 18:02:22.817229 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.817146 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 18:02:22.817229 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.817156 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeHasSufficientPID" Apr 16 18:02:22.840155 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.840136 2571 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-128-59.ec2.internal\" not found" node="ip-10-0-128-59.ec2.internal" Apr 16 18:02:22.844457 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.844440 2571 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-128-59.ec2.internal\" not found" node="ip-10-0-128-59.ec2.internal" Apr 16 18:02:22.862158 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.862140 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-128-59.ec2.internal\" not found" Apr 16 18:02:22.877458 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.877431 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d7cc70a393babc6302c6cd1539b79c3b-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal\" (UID: \"d7cc70a393babc6302c6cd1539b79c3b\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal" Apr 16 18:02:22.877576 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.877460 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7cc70a393babc6302c6cd1539b79c3b-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal\" (UID: \"d7cc70a393babc6302c6cd1539b79c3b\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal" Apr 16 18:02:22.877576 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.877487 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/b23767064b3798a2c7472b4227e16d3a-config\") pod \"kube-apiserver-proxy-ip-10-0-128-59.ec2.internal\" (UID: \"b23767064b3798a2c7472b4227e16d3a\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-128-59.ec2.internal" Apr 16 18:02:22.962423 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:22.962396 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-128-59.ec2.internal\" not found" Apr 16 18:02:22.978358 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.978334 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d7cc70a393babc6302c6cd1539b79c3b-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal\" (UID: \"d7cc70a393babc6302c6cd1539b79c3b\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal" Apr 16 18:02:22.978472 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.978363 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/b23767064b3798a2c7472b4227e16d3a-config\") pod \"kube-apiserver-proxy-ip-10-0-128-59.ec2.internal\" (UID: \"b23767064b3798a2c7472b4227e16d3a\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-128-59.ec2.internal" Apr 16 18:02:22.978472 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.978379 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d7cc70a393babc6302c6cd1539b79c3b-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal\" (UID: \"d7cc70a393babc6302c6cd1539b79c3b\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal" Apr 16 18:02:22.978472 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.978430 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/b23767064b3798a2c7472b4227e16d3a-config\") pod \"kube-apiserver-proxy-ip-10-0-128-59.ec2.internal\" (UID: \"b23767064b3798a2c7472b4227e16d3a\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-128-59.ec2.internal" Apr 16 18:02:22.978472 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.978433 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7cc70a393babc6302c6cd1539b79c3b-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal\" (UID: \"d7cc70a393babc6302c6cd1539b79c3b\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal" Apr 16 18:02:22.978472 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:22.978468 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d7cc70a393babc6302c6cd1539b79c3b-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal\" (UID: \"d7cc70a393babc6302c6cd1539b79c3b\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal" Apr 16 18:02:23.062867 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:23.062794 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-128-59.ec2.internal\" not found" Apr 16 18:02:23.144294 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.144266 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal" Apr 16 18:02:23.148072 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.148054 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-128-59.ec2.internal" Apr 16 18:02:23.163741 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:23.163711 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-128-59.ec2.internal\" not found" Apr 16 18:02:23.264307 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:23.264260 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-128-59.ec2.internal\" not found" Apr 16 18:02:23.364767 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:23.364700 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-128-59.ec2.internal\" not found" Apr 16 18:02:23.459817 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.459791 2571 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 18:02:23.475999 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.475977 2571 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal" Apr 16 18:02:23.487303 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.487280 2571 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 16 18:02:23.488676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.488663 2571 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-128-59.ec2.internal" Apr 16 18:02:23.492712 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.492695 2571 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 16 18:02:23.492829 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.492795 2571 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 16 18:02:23.492870 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.492849 2571 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 16 18:02:23.492908 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:23.492877 2571 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://a8e25f99a0b474cb696c6c3dd69f9c89-baab7f4883e282aa.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kube-system/pods\": read tcp 10.0.128.59:44274->54.198.54.146:6443: use of closed network connection" pod="kube-system/kube-apiserver-proxy-ip-10-0-128-59.ec2.internal" Apr 16 18:02:23.535709 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.535681 2571 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 18:02:23.553663 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.553640 2571 apiserver.go:52] "Watching apiserver" Apr 16 18:02:23.558183 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.558163 2571 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Apr 16 18:02:23.559656 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.559617 2571 
kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal","openshift-multus/multus-additional-cni-plugins-8cd6l","openshift-multus/multus-dztd6","openshift-network-diagnostics/network-check-target-9b2bn","openshift-ovn-kubernetes/ovnkube-node-tlgs4","openshift-cluster-node-tuning-operator/tuned-sqf5x","openshift-dns/node-resolver-82kk2","openshift-image-registry/node-ca-5pm7k","openshift-multus/network-metrics-daemon-tv8pg","openshift-network-operator/iptables-alerter-jw52k","kube-system/konnectivity-agent-9s2wr","kube-system/kube-apiserver-proxy-ip-10-0-128-59.ec2.internal","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr"] Apr 16 18:02:23.562139 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.562116 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-82kk2" Apr 16 18:02:23.563323 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.563298 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.565640 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.565619 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Apr 16 18:02:23.565738 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.565653 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Apr 16 18:02:23.565738 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.565714 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.565838 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.565758 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-nqx78\"" Apr 16 18:02:23.565838 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.565791 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:23.565941 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:23.565854 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:23.567674 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.567447 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 16 18:02:23.567674 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.567449 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 16 18:02:23.567674 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.567454 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.567967 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.567952 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 16 18:02:23.568040 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.568023 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 16 18:02:23.568088 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.568060 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 16 18:02:23.568146 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.568074 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-r9z45\"" Apr 16 18:02:23.568146 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.568087 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 16 18:02:23.568245 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.568155 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-x6sn6\"" Apr 16 18:02:23.569151 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.569135 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.569492 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.569464 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-cj8nh\"" Apr 16 18:02:23.569585 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.569541 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 16 18:02:23.570096 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.570076 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 16 18:02:23.570191 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.570127 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 16 18:02:23.570504 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.570481 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 16 18:02:23.570641 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.570575 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 16 18:02:23.570641 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.570579 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-5pm7k" Apr 16 18:02:23.570641 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.570618 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 16 18:02:23.571035 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.571021 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-kbls5\"" Apr 16 18:02:23.571290 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.571275 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 16 18:02:23.571559 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.571545 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 16 18:02:23.573009 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.572988 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 16 18:02:23.573254 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.573240 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-gws24\"" Apr 16 18:02:23.573431 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.573416 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 16 18:02:23.573533 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.573416 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 16 18:02:23.573865 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.573845 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:23.573960 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:23.573908 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:23.574673 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.574655 2571 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Apr 16 18:02:23.575092 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.575075 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-jw52k" Apr 16 18:02:23.576668 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.576649 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/konnectivity-agent-9s2wr" Apr 16 18:02:23.576761 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.576694 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 16 18:02:23.576873 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.576844 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 16 18:02:23.577260 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.577245 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 16 18:02:23.577336 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.577259 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-xs8hp\"" Apr 16 18:02:23.578564 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.578304 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.578564 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.578500 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-2wzvj\"" Apr 16 18:02:23.578748 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.578564 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" Apr 16 18:02:23.578843 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.578827 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\"" Apr 16 18:02:23.580370 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.580351 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 16 18:02:23.580481 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.580389 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\"" Apr 16 18:02:23.581122 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.581105 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-29h8d\"" Apr 16 18:02:23.581307 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.581293 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\"" Apr 16 18:02:23.582398 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582380 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dfe6a446-b0d0-4f14-ab3f-bc468e461320-cni-binary-copy\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.582480 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582402 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-etc-kubernetes\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.582480 
ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582420 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/433f58f3-2b64-4ade-a6d2-2016a24672b3-hosts-file\") pod \"node-resolver-82kk2\" (UID: \"433f58f3-2b64-4ade-a6d2-2016a24672b3\") " pod="openshift-dns/node-resolver-82kk2" Apr 16 18:02:23.582570 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582491 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-env-overrides\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.582570 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582518 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/a0c27f7e-2a1f-463d-bcfa-0c3f56beb85f-konnectivity-ca\") pod \"konnectivity-agent-9s2wr\" (UID: \"a0c27f7e-2a1f-463d-bcfa-0c3f56beb85f\") " pod="kube-system/konnectivity-agent-9s2wr" Apr 16 18:02:23.582570 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582554 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9rvc\" (UniqueName: \"kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc\") pod \"network-check-target-9b2bn\" (UID: \"3d6007c8-6817-406b-894d-8f5fefd81911\") " pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:23.582726 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582570 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-ovnkube-config\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.582726 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582614 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-cnibin\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.582726 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582640 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-os-release\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.582726 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582658 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-node-log\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.582726 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582671 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-systemd\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.582726 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582684 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-sys\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.582971 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582737 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/96576415-5af9-4a8f-a718-72470bf1a7d9-iptables-alerter-script\") pod \"iptables-alerter-jw52k\" (UID: \"96576415-5af9-4a8f-a718-72470bf1a7d9\") " pod="openshift-network-operator/iptables-alerter-jw52k" Apr 16 18:02:23.582971 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582856 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gntgq\" (UniqueName: \"kubernetes.io/projected/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-kube-api-access-gntgq\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.582971 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582883 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-run-systemd\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.582971 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582918 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-var-lib-openvswitch\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.582971 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582942 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-sysctl-conf\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.582971 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582966 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2zgg\" (UniqueName: \"kubernetes.io/projected/443d2e7a-08b9-4fa3-b1de-3c569b5764fd-kube-api-access-d2zgg\") pod \"node-ca-5pm7k\" (UID: \"443d2e7a-08b9-4fa3-b1de-3c569b5764fd\") " pod="openshift-image-registry/node-ca-5pm7k" Apr 16 18:02:23.583267 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.582988 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.583267 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.583012 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-sysctl-d\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.583516 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.583490 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96576415-5af9-4a8f-a718-72470bf1a7d9-host-slash\") pod \"iptables-alerter-jw52k\" (UID: \"96576415-5af9-4a8f-a718-72470bf1a7d9\") " pod="openshift-network-operator/iptables-alerter-jw52k" Apr 16 18:02:23.583837 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.583814 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/433f58f3-2b64-4ade-a6d2-2016a24672b3-tmp-dir\") pod \"node-resolver-82kk2\" (UID: \"433f58f3-2b64-4ade-a6d2-2016a24672b3\") " pod="openshift-dns/node-resolver-82kk2" Apr 16 18:02:23.583944 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.583870 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/208509f5-55d0-46f5-aebb-6497d6c3fa14-tmp\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.583944 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.583912 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dfe6a446-b0d0-4f14-ab3f-bc468e461320-multus-daemon-config\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.584048 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.583978 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-run-multus-certs\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.584048 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584020 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-system-cni-dir\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.584163 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584054 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-run-ovn\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.584163 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584122 2571 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-run-ovn-kubernetes\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.584258 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584154 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-os-release\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.584258 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584195 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-multus-conf-dir\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.584258 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584222 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-systemd-units\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.584258 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584253 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs\") pod \"network-metrics-daemon-tv8pg\" (UID: \"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe\") " pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:23.584436 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584284 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-multus-socket-dir-parent\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.584436 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584316 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.584436 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584347 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-run-netns\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.584436 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584371 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-cni-bin\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.584631 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584441 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-host\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.584631 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584486 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-tuned\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.584631 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584534 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-multus-cni-dir\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.584631 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584577 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-var-lib-cni-multus\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.584631 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584633 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-hostroot\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.585019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584665 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-slash\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.585019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584690 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-ovn-node-metrics-cert\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.585019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584786 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-var-lib-cni-bin\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.585019 ip-10-0-128-59 kubenswrapper[2571]: I0416 
18:02:23.584867 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4"
Apr 16 18:02:23.585019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584900 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-cni-binary-copy\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l"
Apr 16 18:02:23.585019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.584958 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-modprobe-d\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x"
Apr 16 18:02:23.585019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585017 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-var-lib-kubelet\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x"
Apr 16 18:02:23.585336 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585091 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-run-netns\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6"
Apr 16 18:02:23.585336 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585185 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-log-socket\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4"
Apr 16 18:02:23.585336 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585247 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-cni-netd\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4"
Apr 16 18:02:23.585336 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585275 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-run\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x"
Apr 16 18:02:23.585516 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585386 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzf6h\" (UniqueName: \"kubernetes.io/projected/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-kube-api-access-fzf6h\") pod \"network-metrics-daemon-tv8pg\" (UID: \"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe\") " pod="openshift-multus/network-metrics-daemon-tv8pg"
Apr 16 18:02:23.585516 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585438 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/443d2e7a-08b9-4fa3-b1de-3c569b5764fd-serviceca\") pod \"node-ca-5pm7k\" (UID: \"443d2e7a-08b9-4fa3-b1de-3c569b5764fd\") " pod="openshift-image-registry/node-ca-5pm7k"
Apr 16 18:02:23.585516 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585474 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-etc-openvswitch\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4"
Apr 16 18:02:23.585516 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585504 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-ovnkube-script-lib\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4"
Apr 16 18:02:23.585710 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585553 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r65t2\" (UniqueName: \"kubernetes.io/projected/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-kube-api-access-r65t2\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4"
Apr 16 18:02:23.585710 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585584 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-sysconfig\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x"
Apr 16 18:02:23.585710 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585650 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-cnibin\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6"
Apr 16 18:02:23.585710 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585678 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/443d2e7a-08b9-4fa3-b1de-3c569b5764fd-host\") pod \"node-ca-5pm7k\" (UID: \"443d2e7a-08b9-4fa3-b1de-3c569b5764fd\") " pod="openshift-image-registry/node-ca-5pm7k"
Apr 16 18:02:23.585878 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585706 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l"
Apr 16 18:02:23.585878 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585738 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-run-openvswitch\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4"
Apr 16 18:02:23.585878 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585795 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-kubernetes\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x"
Apr 16 18:02:23.585878 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585849 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-var-lib-kubelet\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6"
Apr 16 18:02:23.586048 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585909 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhgs8\" (UniqueName: \"kubernetes.io/projected/dfe6a446-b0d0-4f14-ab3f-bc468e461320-kube-api-access-fhgs8\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6"
Apr 16 18:02:23.586048 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.585973 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-kubelet\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4"
Apr 16 18:02:23.586139 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.586053 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-lib-modules\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x"
Apr 16 18:02:23.586139 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.586126 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4txqv\" (UniqueName: \"kubernetes.io/projected/208509f5-55d0-46f5-aebb-6497d6c3fa14-kube-api-access-4txqv\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x"
Apr 16 18:02:23.586227 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.586160 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7d6t\" (UniqueName: \"kubernetes.io/projected/96576415-5af9-4a8f-a718-72470bf1a7d9-kube-api-access-d7d6t\") pod \"iptables-alerter-jw52k\" (UID: \"96576415-5af9-4a8f-a718-72470bf1a7d9\") " pod="openshift-network-operator/iptables-alerter-jw52k"
Apr 16 18:02:23.586227 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.586203 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/a0c27f7e-2a1f-463d-bcfa-0c3f56beb85f-agent-certs\") pod \"konnectivity-agent-9s2wr\" (UID: \"a0c27f7e-2a1f-463d-bcfa-0c3f56beb85f\") " pod="kube-system/konnectivity-agent-9s2wr"
Apr 16 18:02:23.586315 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.586268 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp8cb\" (UniqueName: \"kubernetes.io/projected/433f58f3-2b64-4ade-a6d2-2016a24672b3-kube-api-access-gp8cb\") pod \"node-resolver-82kk2\" (UID: \"433f58f3-2b64-4ade-a6d2-2016a24672b3\") " pod="openshift-dns/node-resolver-82kk2"
Apr 16 18:02:23.586315 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.586289 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-system-cni-dir\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6"
Apr 16 18:02:23.586315 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.586310 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-run-k8s-cni-cncf-io\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6"
Apr 16 18:02:23.587124 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.587101 2571 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 16 18:02:23.612179 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.612154 2571 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-vmvq7"
Apr 16 18:02:23.619845 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.619785 2571 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-vmvq7"
Apr 16 18:02:23.622939 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.622910 2571 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-15 17:57:22 +0000 UTC" deadline="2027-11-21 12:01:11.021990101 +0000 UTC"
Apr 16 18:02:23.622939 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.622938 2571 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="14009h58m47.399055089s"
Apr 16 18:02:23.676647 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.676625 2571 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
\"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-host\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.686830 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.686739 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-tuned\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.686830 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.686753 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-multus-cni-dir\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.686830 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.686768 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-var-lib-cni-multus\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.686830 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.686785 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-hostroot\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.686830 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.686824 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-hostroot\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.686830 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.686822 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-host\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.686826 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-cni-bin\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.686850 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-slash\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.686853 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-var-lib-cni-multus\") pod 
\"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.686862 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-multus-cni-dir\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.686900 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-ovn-node-metrics-cert\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.686910 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-slash\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.686921 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-var-lib-cni-bin\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.686961 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.686968 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-var-lib-cni-bin\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.686994 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-cni-binary-copy\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687016 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687020 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-modprobe-d\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687058 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-var-lib-kubelet\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687076 2571 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687085 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-run-netns\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687105 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-modprobe-d\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.687226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687113 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-socket-dir\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687139 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-etc-selinux\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687148 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-run-netns\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687151 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-var-lib-kubelet\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687166 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-log-socket\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687191 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-cni-netd\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687217 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-run\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687218 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-log-socket\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687242 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fzf6h\" (UniqueName: \"kubernetes.io/projected/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-kube-api-access-fzf6h\") pod \"network-metrics-daemon-tv8pg\" (UID: \"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe\") " pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687264 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-cni-netd\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687276 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-run\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687266 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/443d2e7a-08b9-4fa3-b1de-3c569b5764fd-serviceca\") pod \"node-ca-5pm7k\" (UID: \"443d2e7a-08b9-4fa3-b1de-3c569b5764fd\") " pod="openshift-image-registry/node-ca-5pm7k" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687315 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-etc-openvswitch\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687342 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-ovnkube-script-lib\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687367 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r65t2\" (UniqueName: \"kubernetes.io/projected/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-kube-api-access-r65t2\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687389 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-sysconfig\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687411 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-cnibin\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687414 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-etc-openvswitch\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.688062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687434 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/443d2e7a-08b9-4fa3-b1de-3c569b5764fd-host\") pod \"node-ca-5pm7k\" (UID: \"443d2e7a-08b9-4fa3-b1de-3c569b5764fd\") " pod="openshift-image-registry/node-ca-5pm7k" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687460 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687485 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-run-openvswitch\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687510 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-kubernetes\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687539 2571 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-sysconfig\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687553 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-var-lib-kubelet\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687609 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-cnibin\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687621 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fhgs8\" (UniqueName: \"kubernetes.io/projected/dfe6a446-b0d0-4f14-ab3f-bc468e461320-kube-api-access-fhgs8\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687622 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-cni-binary-copy\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687652 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-run-openvswitch\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687716 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/443d2e7a-08b9-4fa3-b1de-3c569b5764fd-host\") pod \"node-ca-5pm7k\" (UID: \"443d2e7a-08b9-4fa3-b1de-3c569b5764fd\") " pod="openshift-image-registry/node-ca-5pm7k" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687739 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/443d2e7a-08b9-4fa3-b1de-3c569b5764fd-serviceca\") pod \"node-ca-5pm7k\" (UID: \"443d2e7a-08b9-4fa3-b1de-3c569b5764fd\") " pod="openshift-image-registry/node-ca-5pm7k" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687767 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-kubernetes\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687805 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-var-lib-kubelet\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687828 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-sys-fs\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687869 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-kubelet\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687896 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-lib-modules\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687941 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4txqv\" (UniqueName: \"kubernetes.io/projected/208509f5-55d0-46f5-aebb-6497d6c3fa14-kube-api-access-4txqv\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.689019 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687951 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-kubelet\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687968 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d7d6t\" (UniqueName: \"kubernetes.io/projected/96576415-5af9-4a8f-a718-72470bf1a7d9-kube-api-access-d7d6t\") pod \"iptables-alerter-jw52k\" (UID: \"96576415-5af9-4a8f-a718-72470bf1a7d9\") " pod="openshift-network-operator/iptables-alerter-jw52k" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.687990 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-ovnkube-script-lib\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688012 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/a0c27f7e-2a1f-463d-bcfa-0c3f56beb85f-agent-certs\") pod \"konnectivity-agent-9s2wr\" (UID: \"a0c27f7e-2a1f-463d-bcfa-0c3f56beb85f\") " pod="kube-system/konnectivity-agent-9s2wr" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688040 2571 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gp8cb\" (UniqueName: \"kubernetes.io/projected/433f58f3-2b64-4ade-a6d2-2016a24672b3-kube-api-access-gp8cb\") pod \"node-resolver-82kk2\" (UID: \"433f58f3-2b64-4ade-a6d2-2016a24672b3\") " pod="openshift-dns/node-resolver-82kk2" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688066 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-system-cni-dir\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688073 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-lib-modules\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688091 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-run-k8s-cni-cncf-io\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688118 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dfe6a446-b0d0-4f14-ab3f-bc468e461320-cni-binary-copy\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688132 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688141 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-etc-kubernetes\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688174 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/433f58f3-2b64-4ade-a6d2-2016a24672b3-hosts-file\") pod \"node-resolver-82kk2\" (UID: \"433f58f3-2b64-4ade-a6d2-2016a24672b3\") " pod="openshift-dns/node-resolver-82kk2" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688189 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-system-cni-dir\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688200 
2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-env-overrides\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688254 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-etc-kubernetes\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688272 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-run-k8s-cni-cncf-io\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688311 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/433f58f3-2b64-4ade-a6d2-2016a24672b3-hosts-file\") pod \"node-resolver-82kk2\" (UID: \"433f58f3-2b64-4ade-a6d2-2016a24672b3\") " pod="openshift-dns/node-resolver-82kk2" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688312 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/a0c27f7e-2a1f-463d-bcfa-0c3f56beb85f-konnectivity-ca\") pod \"konnectivity-agent-9s2wr\" (UID: \"a0c27f7e-2a1f-463d-bcfa-0c3f56beb85f\") " pod="kube-system/konnectivity-agent-9s2wr" Apr 16 18:02:23.689883 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688348 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z9rvc\" (UniqueName: \"kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc\") pod \"network-check-target-9b2bn\" (UID: \"3d6007c8-6817-406b-894d-8f5fefd81911\") " pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688380 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-device-dir\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688407 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-ovnkube-config\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688435 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-cnibin\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " 
pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688460 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-os-release\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688489 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-node-log\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688514 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-systemd\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688538 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-sys\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688639 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-os-release\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688563 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/96576415-5af9-4a8f-a718-72470bf1a7d9-iptables-alerter-script\") pod \"iptables-alerter-jw52k\" (UID: \"96576415-5af9-4a8f-a718-72470bf1a7d9\") " pod="openshift-network-operator/iptables-alerter-jw52k" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688707 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dfe6a446-b0d0-4f14-ab3f-bc468e461320-cni-binary-copy\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688715 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-kubelet-dir\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688774 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gntgq\" (UniqueName: 
\"kubernetes.io/projected/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-kube-api-access-gntgq\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688802 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-run-systemd\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688844 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-var-lib-openvswitch\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688846 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/a0c27f7e-2a1f-463d-bcfa-0c3f56beb85f-konnectivity-ca\") pod \"konnectivity-agent-9s2wr\" (UID: \"a0c27f7e-2a1f-463d-bcfa-0c3f56beb85f\") " pod="kube-system/konnectivity-agent-9s2wr" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688871 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-sysctl-conf\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.690706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688900 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d2zgg\" (UniqueName: \"kubernetes.io/projected/443d2e7a-08b9-4fa3-b1de-3c569b5764fd-kube-api-access-d2zgg\") pod \"node-ca-5pm7k\" (UID: \"443d2e7a-08b9-4fa3-b1de-3c569b5764fd\") " pod="openshift-image-registry/node-ca-5pm7k" Apr 16 18:02:23.691177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688912 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-cnibin\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.691177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688930 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-registration-dir\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.691177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.688970 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-systemd\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.691177 ip-10-0-128-59 
kubenswrapper[2571]: I0416 18:02:23.688971 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.691177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689009 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-sysctl-d\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.691177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689036 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96576415-5af9-4a8f-a718-72470bf1a7d9-host-slash\") pod \"iptables-alerter-jw52k\" (UID: \"96576415-5af9-4a8f-a718-72470bf1a7d9\") " pod="openshift-network-operator/iptables-alerter-jw52k" Apr 16 18:02:23.691177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689060 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/433f58f3-2b64-4ade-a6d2-2016a24672b3-tmp-dir\") pod \"node-resolver-82kk2\" (UID: \"433f58f3-2b64-4ade-a6d2-2016a24672b3\") " pod="openshift-dns/node-resolver-82kk2" Apr 16 18:02:23.691177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689082 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/208509f5-55d0-46f5-aebb-6497d6c3fa14-tmp\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.691177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689104 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dfe6a446-b0d0-4f14-ab3f-bc468e461320-multus-daemon-config\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.691177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689126 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-run-multus-certs\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.691177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689147 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-system-cni-dir\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.691177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689168 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-run-ovn\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.691177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689189 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-run-ovn-kubernetes\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.691177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689206 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-ovnkube-config\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.691177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689265 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-node-log\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.691177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689213 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-os-release\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.691177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689268 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96576415-5af9-4a8f-a718-72470bf1a7d9-host-slash\") pod \"iptables-alerter-jw52k\" (UID: \"96576415-5af9-4a8f-a718-72470bf1a7d9\") " pod="openshift-network-operator/iptables-alerter-jw52k" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689303 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-sysctl-d\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689311 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-multus-conf-dir\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689349 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-systemd-units\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689378 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs\") pod \"network-metrics-daemon-tv8pg\" (UID: \"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe\") " 
pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689404 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-multus-socket-dir-parent\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689439 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd86z\" (UniqueName: \"kubernetes.io/projected/a8e6ff0a-fab0-4399-9c0e-356c833b2958-kube-api-access-fd86z\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689474 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689523 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-run-netns\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689615 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-run-netns\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689704 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689804 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dfe6a446-b0d0-4f14-ab3f-bc468e461320-multus-daemon-config\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689860 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-host-run-multus-certs\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689863 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/433f58f3-2b64-4ade-a6d2-2016a24672b3-tmp-dir\") pod \"node-resolver-82kk2\" (UID: \"433f58f3-2b64-4ade-a6d2-2016a24672b3\") " pod="openshift-dns/node-resolver-82kk2" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689910 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-system-cni-dir\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689954 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-run-ovn\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:23.689987 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689992 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-host-run-ovn-kubernetes\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.691676 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.689210 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/96576415-5af9-4a8f-a718-72470bf1a7d9-iptables-alerter-script\") pod \"iptables-alerter-jw52k\" (UID: \"96576415-5af9-4a8f-a718-72470bf1a7d9\") " pod="openshift-network-operator/iptables-alerter-jw52k" Apr 16 18:02:23.692181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.690033 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-env-overrides\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.692181 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:23.690046 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs podName:28103df6-de37-4b7f-b3e8-6ef03a0d1cfe nodeName:}" failed. No retries permitted until 2026-04-16 18:02:24.190028172 +0000 UTC m=+2.018991287 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs") pod "network-metrics-daemon-tv8pg" (UID: "28103df6-de37-4b7f-b3e8-6ef03a0d1cfe") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:02:23.692181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.690097 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-sys\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.692181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.690275 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-multus-conf-dir\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.692181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.690298 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-sysctl-conf\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.692181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.690310 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-systemd-units\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.692181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.690340 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-run-systemd\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.692181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.690355 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-multus-socket-dir-parent\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.692181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.690392 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dfe6a446-b0d0-4f14-ab3f-bc468e461320-os-release\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.692181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.690438 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.692181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.690390 2571 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-var-lib-openvswitch\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.692181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.690517 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-ovn-node-metrics-cert\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.692181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.690652 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/a0c27f7e-2a1f-463d-bcfa-0c3f56beb85f-agent-certs\") pod \"konnectivity-agent-9s2wr\" (UID: \"a0c27f7e-2a1f-463d-bcfa-0c3f56beb85f\") " pod="kube-system/konnectivity-agent-9s2wr" Apr 16 18:02:23.692181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.690777 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/208509f5-55d0-46f5-aebb-6497d6c3fa14-etc-tuned\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.692181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.691104 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/208509f5-55d0-46f5-aebb-6497d6c3fa14-tmp\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.693192 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:23.693164 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7cc70a393babc6302c6cd1539b79c3b.slice/crio-fc2b7ce581b01858a2e89c0c0ced0cd0a8372681b3bc348a70f128a2b1868eba WatchSource:0}: Error finding container fc2b7ce581b01858a2e89c0c0ced0cd0a8372681b3bc348a70f128a2b1868eba: Status 404 returned error can't find the container with id fc2b7ce581b01858a2e89c0c0ced0cd0a8372681b3bc348a70f128a2b1868eba Apr 16 18:02:23.693406 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:23.693390 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb23767064b3798a2c7472b4227e16d3a.slice/crio-8d703cca5f99bd23d5371000d1b3faeab2956d780adc21dc3d2721a37ce19188 WatchSource:0}: Error finding container 8d703cca5f99bd23d5371000d1b3faeab2956d780adc21dc3d2721a37ce19188: Status 404 returned error can't find the container with id 8d703cca5f99bd23d5371000d1b3faeab2956d780adc21dc3d2721a37ce19188 Apr 16 18:02:23.696002 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.695979 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzf6h\" (UniqueName: \"kubernetes.io/projected/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-kube-api-access-fzf6h\") pod \"network-metrics-daemon-tv8pg\" (UID: \"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe\") " pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:23.697868 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.697853 2571 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 18:02:23.699631 ip-10-0-128-59 
kubenswrapper[2571]: E0416 18:02:23.699003 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:02:23.699631 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:23.699024 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:02:23.699631 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:23.699037 2571 projected.go:194] Error preparing data for projected volume kube-api-access-z9rvc for pod openshift-network-diagnostics/network-check-target-9b2bn: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:02:23.699631 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:23.699090 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc podName:3d6007c8-6817-406b-894d-8f5fefd81911 nodeName:}" failed. No retries permitted until 2026-04-16 18:02:24.199072611 +0000 UTC m=+2.028035739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-z9rvc" (UniqueName: "kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc") pod "network-check-target-9b2bn" (UID: "3d6007c8-6817-406b-894d-8f5fefd81911") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:02:23.701271 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.701033 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r65t2\" (UniqueName: \"kubernetes.io/projected/bf2455ff-3f2f-4b0a-9d79-994f43be7b2f-kube-api-access-r65t2\") pod \"ovnkube-node-tlgs4\" (UID: \"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.701458 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.701408 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp8cb\" (UniqueName: \"kubernetes.io/projected/433f58f3-2b64-4ade-a6d2-2016a24672b3-kube-api-access-gp8cb\") pod \"node-resolver-82kk2\" (UID: \"433f58f3-2b64-4ade-a6d2-2016a24672b3\") " pod="openshift-dns/node-resolver-82kk2" Apr 16 18:02:23.701841 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.701816 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhgs8\" (UniqueName: \"kubernetes.io/projected/dfe6a446-b0d0-4f14-ab3f-bc468e461320-kube-api-access-fhgs8\") pod \"multus-dztd6\" (UID: \"dfe6a446-b0d0-4f14-ab3f-bc468e461320\") " pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.703017 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.702982 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4txqv\" (UniqueName: \"kubernetes.io/projected/208509f5-55d0-46f5-aebb-6497d6c3fa14-kube-api-access-4txqv\") pod \"tuned-sqf5x\" (UID: \"208509f5-55d0-46f5-aebb-6497d6c3fa14\") " pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.703629 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.703075 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2zgg\" (UniqueName: 
\"kubernetes.io/projected/443d2e7a-08b9-4fa3-b1de-3c569b5764fd-kube-api-access-d2zgg\") pod \"node-ca-5pm7k\" (UID: \"443d2e7a-08b9-4fa3-b1de-3c569b5764fd\") " pod="openshift-image-registry/node-ca-5pm7k" Apr 16 18:02:23.704538 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.704001 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gntgq\" (UniqueName: \"kubernetes.io/projected/529cdc35-2ba8-48a7-8e7c-fefbb7c00f18-kube-api-access-gntgq\") pod \"multus-additional-cni-plugins-8cd6l\" (UID: \"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18\") " pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.705160 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.705146 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-9s2wr" Apr 16 18:02:23.705659 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.705636 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7d6t\" (UniqueName: \"kubernetes.io/projected/96576415-5af9-4a8f-a718-72470bf1a7d9-kube-api-access-d7d6t\") pod \"iptables-alerter-jw52k\" (UID: \"96576415-5af9-4a8f-a718-72470bf1a7d9\") " pod="openshift-network-operator/iptables-alerter-jw52k" Apr 16 18:02:23.711980 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:23.711956 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0c27f7e_2a1f_463d_bcfa_0c3f56beb85f.slice/crio-a3e1abf6fb31a587e9a98f1074e6866d38121c9724dcf8bae579f8a7e0c0af36 WatchSource:0}: Error finding container a3e1abf6fb31a587e9a98f1074e6866d38121c9724dcf8bae579f8a7e0c0af36: Status 404 returned error can't find the container with id a3e1abf6fb31a587e9a98f1074e6866d38121c9724dcf8bae579f8a7e0c0af36 Apr 16 18:02:23.713314 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.713243 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal" event={"ID":"d7cc70a393babc6302c6cd1539b79c3b","Type":"ContainerStarted","Data":"fc2b7ce581b01858a2e89c0c0ced0cd0a8372681b3bc348a70f128a2b1868eba"} Apr 16 18:02:23.714428 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.714411 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-128-59.ec2.internal" event={"ID":"b23767064b3798a2c7472b4227e16d3a","Type":"ContainerStarted","Data":"8d703cca5f99bd23d5371000d1b3faeab2956d780adc21dc3d2721a37ce19188"} Apr 16 18:02:23.790027 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.790003 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-device-dir\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.790027 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.790034 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-kubelet-dir\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.790211 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.790053 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-registration-dir\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.790211 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.790083 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fd86z\" (UniqueName: \"kubernetes.io/projected/a8e6ff0a-fab0-4399-9c0e-356c833b2958-kube-api-access-fd86z\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.790211 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.790126 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-kubelet-dir\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.790211 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.790148 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-registration-dir\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.790211 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.790126 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-device-dir\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.790211 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.790196 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-socket-dir\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.790404 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.790227 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-etc-selinux\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.790404 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.790262 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-sys-fs\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.790404 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.790318 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-socket-dir\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: 
\"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.790404 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.790351 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-etc-selinux\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.790404 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.790365 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/a8e6ff0a-fab0-4399-9c0e-356c833b2958-sys-fs\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.798673 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.798655 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd86z\" (UniqueName: \"kubernetes.io/projected/a8e6ff0a-fab0-4399-9c0e-356c833b2958-kube-api-access-fd86z\") pod \"aws-ebs-csi-driver-node-fprmr\" (UID: \"a8e6ff0a-fab0-4399-9c0e-356c833b2958\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:23.886279 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.886206 2571 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 18:02:23.887229 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.887216 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-82kk2" Apr 16 18:02:23.892966 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:23.892942 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod433f58f3_2b64_4ade_a6d2_2016a24672b3.slice/crio-be9c3c4ac91d4bfd7bb2cbbd111f1187a1462203692025a8fee9bd538ebc43fc WatchSource:0}: Error finding container be9c3c4ac91d4bfd7bb2cbbd111f1187a1462203692025a8fee9bd538ebc43fc: Status 404 returned error can't find the container with id be9c3c4ac91d4bfd7bb2cbbd111f1187a1462203692025a8fee9bd538ebc43fc Apr 16 18:02:23.898816 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.898799 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8cd6l" Apr 16 18:02:23.904685 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:23.904662 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod529cdc35_2ba8_48a7_8e7c_fefbb7c00f18.slice/crio-331fd468e2b5a7788c9fdcad77591cbc3b1861c65075c72481c3f7420bf3eeab WatchSource:0}: Error finding container 331fd468e2b5a7788c9fdcad77591cbc3b1861c65075c72481c3f7420bf3eeab: Status 404 returned error can't find the container with id 331fd468e2b5a7788c9fdcad77591cbc3b1861c65075c72481c3f7420bf3eeab Apr 16 18:02:23.912583 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.912563 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-dztd6" Apr 16 18:02:23.919372 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:23.919350 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfe6a446_b0d0_4f14_ab3f_bc468e461320.slice/crio-bdda963325031f2c350c6b33b64021a29182835875243ba3f398d1f2d5af7cfe WatchSource:0}: Error finding container bdda963325031f2c350c6b33b64021a29182835875243ba3f398d1f2d5af7cfe: Status 404 returned error can't find the container with id bdda963325031f2c350c6b33b64021a29182835875243ba3f398d1f2d5af7cfe Apr 16 18:02:23.925168 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.925149 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:23.930509 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:23.930488 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf2455ff_3f2f_4b0a_9d79_994f43be7b2f.slice/crio-910add459e9f47596f35789a66dc7d16b78cb6ebaf8153dd35f6cc56ded9ef90 WatchSource:0}: Error finding container 910add459e9f47596f35789a66dc7d16b78cb6ebaf8153dd35f6cc56ded9ef90: Status 404 returned error can't find the container with id 910add459e9f47596f35789a66dc7d16b78cb6ebaf8153dd35f6cc56ded9ef90 Apr 16 18:02:23.938771 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.938754 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" Apr 16 18:02:23.943820 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:23.943800 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod208509f5_55d0_46f5_aebb_6497d6c3fa14.slice/crio-fbd43cfbee5d3d3354022c3a781cff4aade1ca2c2665616541c477bea3fe9f60 WatchSource:0}: Error finding container fbd43cfbee5d3d3354022c3a781cff4aade1ca2c2665616541c477bea3fe9f60: Status 404 returned error can't find the container with id fbd43cfbee5d3d3354022c3a781cff4aade1ca2c2665616541c477bea3fe9f60 Apr 16 18:02:23.952782 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.952765 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-5pm7k" Apr 16 18:02:23.957954 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:23.957928 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod443d2e7a_08b9_4fa3_b1de_3c569b5764fd.slice/crio-88bf9f01d08396cd968a3b92a2dcc724c8bca58f14bb8d4f479487bee6d69623 WatchSource:0}: Error finding container 88bf9f01d08396cd968a3b92a2dcc724c8bca58f14bb8d4f479487bee6d69623: Status 404 returned error can't find the container with id 88bf9f01d08396cd968a3b92a2dcc724c8bca58f14bb8d4f479487bee6d69623 Apr 16 18:02:23.971087 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:23.971061 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-jw52k" Apr 16 18:02:23.977244 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:23.977225 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96576415_5af9_4a8f_a718_72470bf1a7d9.slice/crio-a18953aac955baf53edcfa1a1ce9ff2e07e90a2586e5b773b3ebb3a046f1ba17 WatchSource:0}: Error finding container a18953aac955baf53edcfa1a1ce9ff2e07e90a2586e5b773b3ebb3a046f1ba17: Status 404 returned error can't find the container with id a18953aac955baf53edcfa1a1ce9ff2e07e90a2586e5b773b3ebb3a046f1ba17 Apr 16 18:02:24.010744 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:24.010716 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" Apr 16 18:02:24.018213 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:02:24.018174 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8e6ff0a_fab0_4399_9c0e_356c833b2958.slice/crio-232df3d369ce3799261e2c2d0a647a0d12ef8e0d77733db4f4aec53fed35e5d4 WatchSource:0}: Error finding container 232df3d369ce3799261e2c2d0a647a0d12ef8e0d77733db4f4aec53fed35e5d4: Status 404 returned error can't find the container with id 232df3d369ce3799261e2c2d0a647a0d12ef8e0d77733db4f4aec53fed35e5d4 Apr 16 18:02:24.193649 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:24.193561 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs\") pod \"network-metrics-daemon-tv8pg\" (UID: \"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe\") " pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:24.193809 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:24.193743 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:02:24.193867 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:24.193813 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs podName:28103df6-de37-4b7f-b3e8-6ef03a0d1cfe nodeName:}" failed. No retries permitted until 2026-04-16 18:02:25.193795298 +0000 UTC m=+3.022758413 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs") pod "network-metrics-daemon-tv8pg" (UID: "28103df6-de37-4b7f-b3e8-6ef03a0d1cfe") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:02:24.294692 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:24.294654 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z9rvc\" (UniqueName: \"kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc\") pod \"network-check-target-9b2bn\" (UID: \"3d6007c8-6817-406b-894d-8f5fefd81911\") " pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:24.294847 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:24.294815 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:02:24.294847 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:24.294833 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:02:24.294847 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:24.294846 2571 projected.go:194] Error preparing data for projected volume kube-api-access-z9rvc for pod openshift-network-diagnostics/network-check-target-9b2bn: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:02:24.295011 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:24.294913 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc podName:3d6007c8-6817-406b-894d-8f5fefd81911 nodeName:}" failed. No retries permitted until 2026-04-16 18:02:25.294893461 +0000 UTC m=+3.123856590 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-z9rvc" (UniqueName: "kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc") pod "network-check-target-9b2bn" (UID: "3d6007c8-6817-406b-894d-8f5fefd81911") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:02:24.621222 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:24.621099 2571 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-15 17:57:23 +0000 UTC" deadline="2027-11-03 17:00:42.92996535 +0000 UTC" Apr 16 18:02:24.621222 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:24.621134 2571 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="13582h58m18.308835469s" Apr 16 18:02:24.714232 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:24.714202 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:24.714403 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:24.714327 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:24.724785 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:24.724571 2571 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 18:02:24.725991 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:24.725901 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-9s2wr" event={"ID":"a0c27f7e-2a1f-463d-bcfa-0c3f56beb85f","Type":"ContainerStarted","Data":"a3e1abf6fb31a587e9a98f1074e6866d38121c9724dcf8bae579f8a7e0c0af36"} Apr 16 18:02:24.732333 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:24.732287 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5pm7k" event={"ID":"443d2e7a-08b9-4fa3-b1de-3c569b5764fd","Type":"ContainerStarted","Data":"88bf9f01d08396cd968a3b92a2dcc724c8bca58f14bb8d4f479487bee6d69623"} Apr 16 18:02:24.742914 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:24.742886 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" event={"ID":"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f","Type":"ContainerStarted","Data":"910add459e9f47596f35789a66dc7d16b78cb6ebaf8153dd35f6cc56ded9ef90"} Apr 16 18:02:24.754275 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:24.754243 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8cd6l" event={"ID":"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18","Type":"ContainerStarted","Data":"331fd468e2b5a7788c9fdcad77591cbc3b1861c65075c72481c3f7420bf3eeab"} Apr 16 18:02:24.762789 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:24.762753 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-82kk2" event={"ID":"433f58f3-2b64-4ade-a6d2-2016a24672b3","Type":"ContainerStarted","Data":"be9c3c4ac91d4bfd7bb2cbbd111f1187a1462203692025a8fee9bd538ebc43fc"} Apr 16 18:02:24.770762 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:24.770729 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" event={"ID":"a8e6ff0a-fab0-4399-9c0e-356c833b2958","Type":"ContainerStarted","Data":"232df3d369ce3799261e2c2d0a647a0d12ef8e0d77733db4f4aec53fed35e5d4"} Apr 16 18:02:24.775365 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:24.775338 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-jw52k" event={"ID":"96576415-5af9-4a8f-a718-72470bf1a7d9","Type":"ContainerStarted","Data":"a18953aac955baf53edcfa1a1ce9ff2e07e90a2586e5b773b3ebb3a046f1ba17"} Apr 16 18:02:24.783158 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:24.783129 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" event={"ID":"208509f5-55d0-46f5-aebb-6497d6c3fa14","Type":"ContainerStarted","Data":"fbd43cfbee5d3d3354022c3a781cff4aade1ca2c2665616541c477bea3fe9f60"} Apr 16 18:02:24.801493 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:24.801456 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dztd6" event={"ID":"dfe6a446-b0d0-4f14-ab3f-bc468e461320","Type":"ContainerStarted","Data":"bdda963325031f2c350c6b33b64021a29182835875243ba3f398d1f2d5af7cfe"} Apr 16 18:02:25.201292 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:25.200743 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs\") pod \"network-metrics-daemon-tv8pg\" (UID: \"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe\") " pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:25.201292 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:25.200899 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:02:25.201292 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:25.200963 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs podName:28103df6-de37-4b7f-b3e8-6ef03a0d1cfe nodeName:}" failed. No retries permitted until 2026-04-16 18:02:27.200945718 +0000 UTC m=+5.029908835 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs") pod "network-metrics-daemon-tv8pg" (UID: "28103df6-de37-4b7f-b3e8-6ef03a0d1cfe") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:02:25.301356 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:25.301320 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z9rvc\" (UniqueName: \"kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc\") pod \"network-check-target-9b2bn\" (UID: \"3d6007c8-6817-406b-894d-8f5fefd81911\") " pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:25.301534 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:25.301516 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:02:25.301624 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:25.301542 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:02:25.301624 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:25.301555 2571 projected.go:194] Error preparing data for projected volume kube-api-access-z9rvc for pod openshift-network-diagnostics/network-check-target-9b2bn: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:02:25.301740 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:25.301632 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc podName:3d6007c8-6817-406b-894d-8f5fefd81911 nodeName:}" failed. No retries permitted until 2026-04-16 18:02:27.301613198 +0000 UTC m=+5.130576315 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-z9rvc" (UniqueName: "kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc") pod "network-check-target-9b2bn" (UID: "3d6007c8-6817-406b-894d-8f5fefd81911") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:02:25.622223 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:25.622136 2571 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-15 17:57:23 +0000 UTC" deadline="2028-01-26 04:37:41.490614746 +0000 UTC" Apr 16 18:02:25.622223 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:25.622175 2571 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="15586h35m15.868444239s" Apr 16 18:02:25.624324 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:25.624301 2571 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 18:02:25.712175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:25.712144 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:25.712351 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:25.712264 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:26.710318 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:26.710266 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:26.710786 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:26.710406 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:27.217762 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:27.217142 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs\") pod \"network-metrics-daemon-tv8pg\" (UID: \"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe\") " pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:27.217762 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:27.217314 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:02:27.217762 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:27.217399 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs podName:28103df6-de37-4b7f-b3e8-6ef03a0d1cfe nodeName:}" failed. No retries permitted until 2026-04-16 18:02:31.217361898 +0000 UTC m=+9.046325012 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs") pod "network-metrics-daemon-tv8pg" (UID: "28103df6-de37-4b7f-b3e8-6ef03a0d1cfe") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:02:27.318444 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:27.317791 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z9rvc\" (UniqueName: \"kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc\") pod \"network-check-target-9b2bn\" (UID: \"3d6007c8-6817-406b-894d-8f5fefd81911\") " pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:27.318444 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:27.318001 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:02:27.318444 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:27.318020 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:02:27.318444 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:27.318033 2571 projected.go:194] Error preparing data for projected volume kube-api-access-z9rvc for pod openshift-network-diagnostics/network-check-target-9b2bn: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:02:27.318444 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:27.318092 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc podName:3d6007c8-6817-406b-894d-8f5fefd81911 nodeName:}" failed. No retries permitted until 2026-04-16 18:02:31.318073657 +0000 UTC m=+9.147036782 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-z9rvc" (UniqueName: "kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc") pod "network-check-target-9b2bn" (UID: "3d6007c8-6817-406b-894d-8f5fefd81911") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:02:27.710208 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:27.710125 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:27.710378 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:27.710260 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:28.711615 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:28.710862 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:28.711615 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:28.711021 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:29.710493 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:29.710453 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:29.710714 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:29.710579 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:30.710618 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:30.710499 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:30.711178 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:30.711144 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:31.251340 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:31.251302 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs\") pod \"network-metrics-daemon-tv8pg\" (UID: \"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe\") " pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:31.251511 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:31.251459 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:02:31.251570 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:31.251521 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs podName:28103df6-de37-4b7f-b3e8-6ef03a0d1cfe nodeName:}" failed. No retries permitted until 2026-04-16 18:02:39.251502336 +0000 UTC m=+17.080465449 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs") pod "network-metrics-daemon-tv8pg" (UID: "28103df6-de37-4b7f-b3e8-6ef03a0d1cfe") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:02:31.352446 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:31.352374 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z9rvc\" (UniqueName: \"kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc\") pod \"network-check-target-9b2bn\" (UID: \"3d6007c8-6817-406b-894d-8f5fefd81911\") " pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:31.352619 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:31.352534 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:02:31.352619 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:31.352555 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:02:31.352619 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:31.352568 2571 projected.go:194] Error preparing data for projected volume kube-api-access-z9rvc for pod openshift-network-diagnostics/network-check-target-9b2bn: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:02:31.352794 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:31.352641 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc podName:3d6007c8-6817-406b-894d-8f5fefd81911 nodeName:}" failed. No retries permitted until 2026-04-16 18:02:39.352623948 +0000 UTC m=+17.181587062 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-z9rvc" (UniqueName: "kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc") pod "network-check-target-9b2bn" (UID: "3d6007c8-6817-406b-894d-8f5fefd81911") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:02:31.710123 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:31.710026 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:31.710306 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:31.710155 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:32.711414 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:32.711376 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:32.711976 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:32.711504 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:33.710838 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:33.710810 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:33.710948 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:33.710909 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:34.710489 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:34.710452 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:34.710900 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:34.710609 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:35.710619 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:35.710574 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:35.711032 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:35.710697 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:36.711016 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:36.710975 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:36.711423 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:36.711086 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:37.710617 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:37.710570 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:37.710796 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:37.710698 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:38.710283 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:38.710249 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:38.710753 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:38.710390 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:39.307865 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:39.307821 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs\") pod \"network-metrics-daemon-tv8pg\" (UID: \"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe\") " pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:39.308060 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:39.307999 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:02:39.308108 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:39.308079 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs podName:28103df6-de37-4b7f-b3e8-6ef03a0d1cfe nodeName:}" failed. No retries permitted until 2026-04-16 18:02:55.308058248 +0000 UTC m=+33.137021378 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs") pod "network-metrics-daemon-tv8pg" (UID: "28103df6-de37-4b7f-b3e8-6ef03a0d1cfe") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:02:39.408309 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:39.408276 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z9rvc\" (UniqueName: \"kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc\") pod \"network-check-target-9b2bn\" (UID: \"3d6007c8-6817-406b-894d-8f5fefd81911\") " pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:39.408476 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:39.408403 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:02:39.408476 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:39.408418 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:02:39.408476 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:39.408429 2571 projected.go:194] Error preparing data for projected volume kube-api-access-z9rvc for pod openshift-network-diagnostics/network-check-target-9b2bn: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:02:39.408623 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:39.408481 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc podName:3d6007c8-6817-406b-894d-8f5fefd81911 nodeName:}" failed. No retries permitted until 2026-04-16 18:02:55.408463612 +0000 UTC m=+33.237426724 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-z9rvc" (UniqueName: "kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc") pod "network-check-target-9b2bn" (UID: "3d6007c8-6817-406b-894d-8f5fefd81911") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:02:39.710612 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:39.710518 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:39.710994 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:39.710676 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:40.710087 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:40.710056 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:40.710270 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:40.710164 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:41.709966 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:41.709945 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:41.710216 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:41.710040 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:42.711880 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.711645 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:42.712681 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:42.712044 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:42.834959 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.834709 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8cd6l" event={"ID":"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18","Type":"ContainerStarted","Data":"d6c0aae330bf7026e2ad288d1cdfc480f20f2ee34ee6136d813b129d1d753281"} Apr 16 18:02:42.836324 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.836255 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-82kk2" event={"ID":"433f58f3-2b64-4ade-a6d2-2016a24672b3","Type":"ContainerStarted","Data":"b1b903dd57792f026111576db734019635e6d66f5ef8108fb9b51c927a53f432"} Apr 16 18:02:42.837498 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.837470 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-128-59.ec2.internal" event={"ID":"b23767064b3798a2c7472b4227e16d3a","Type":"ContainerStarted","Data":"9b83d3bddc7b4cea159c1cb11896699c5aa9aaf34884467ed27faca45f3777e9"} Apr 16 18:02:42.838713 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.838693 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" event={"ID":"a8e6ff0a-fab0-4399-9c0e-356c833b2958","Type":"ContainerStarted","Data":"10a75e5d2a218558cda3ebbd5e6f24fa7a0e14801bff301321e8891dec1e78be"} Apr 16 18:02:42.839797 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.839776 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" event={"ID":"208509f5-55d0-46f5-aebb-6497d6c3fa14","Type":"ContainerStarted","Data":"218bf941b5bb6591294b6b511cabe2446b094e83b15ffc4cc50ae24a8fe8c825"} Apr 16 18:02:42.841114 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.841096 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dztd6" event={"ID":"dfe6a446-b0d0-4f14-ab3f-bc468e461320","Type":"ContainerStarted","Data":"41ebef9cadb0cfaae61956b36886c436be98dbccb52714cd223621c94d18a871"} Apr 16 18:02:42.842293 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.842275 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-9s2wr" event={"ID":"a0c27f7e-2a1f-463d-bcfa-0c3f56beb85f","Type":"ContainerStarted","Data":"fceffb568f25beea2166b4eb2f7054bca55a5ae48f440756e90a95147c828a20"} Apr 16 18:02:42.843727 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.843706 2571 generic.go:358] "Generic (PLEG): container finished" podID="d7cc70a393babc6302c6cd1539b79c3b" containerID="a371dca3f53fd4a846f961f83b4eab181714dbea87784ec334f034c35159b522" exitCode=0 Apr 16 18:02:42.843802 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.843760 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal" event={"ID":"d7cc70a393babc6302c6cd1539b79c3b","Type":"ContainerDied","Data":"a371dca3f53fd4a846f961f83b4eab181714dbea87784ec334f034c35159b522"} Apr 16 18:02:42.845094 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.845076 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5pm7k" event={"ID":"443d2e7a-08b9-4fa3-b1de-3c569b5764fd","Type":"ContainerStarted","Data":"267d6f3ae4c6209ab7e604662dea904fbb96357b1b6e530ab6c232ea080d8789"} Apr 16 18:02:42.847648 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.847631 2571 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" event={"ID":"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f","Type":"ContainerStarted","Data":"8a6388ea28c75b1e5afaf8bc02ce04bbbc86344fc0880fe1f9c2e6ff5912454b"} Apr 16 18:02:42.847733 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.847654 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" event={"ID":"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f","Type":"ContainerStarted","Data":"aa0ce3a70fb98963c28abd72629ea1769661ceb1f9b576dfde685ca72b20674a"} Apr 16 18:02:42.847733 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.847668 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" event={"ID":"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f","Type":"ContainerStarted","Data":"c24a0fee196ca732b0d301c76aca58a198a55cb65ad42456d05dc2c5182313c2"} Apr 16 18:02:42.847733 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.847678 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" event={"ID":"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f","Type":"ContainerStarted","Data":"c6a925b8425a5ed20fed2a1e02503ce4b90f2674c7f8c117de95f19993c53f94"} Apr 16 18:02:42.847733 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.847690 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" event={"ID":"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f","Type":"ContainerStarted","Data":"15fe0bb2e045aa9841fd40a5c35136c353e66c9b0186b7e54f54411d6139b175"} Apr 16 18:02:42.847733 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.847702 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" event={"ID":"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f","Type":"ContainerStarted","Data":"6a1e55e48ea2e830ee904ac7fa6ee0b6e9d12e610e51c8ceb9ee7a704ce625a5"} Apr 16 18:02:42.970564 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:42.970513 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-dztd6" podStartSLOduration=3.12573419 podStartE2EDuration="20.970495478s" podCreationTimestamp="2026-04-16 18:02:22 +0000 UTC" firstStartedPulling="2026-04-16 18:02:23.920731192 +0000 UTC m=+1.749694307" lastFinishedPulling="2026-04-16 18:02:41.76549248 +0000 UTC m=+19.594455595" observedRunningTime="2026-04-16 18:02:42.969441464 +0000 UTC m=+20.798404794" watchObservedRunningTime="2026-04-16 18:02:42.970495478 +0000 UTC m=+20.799458611" Apr 16 18:02:43.010606 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:43.010552 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-5pm7k" podStartSLOduration=11.289366424 podStartE2EDuration="21.010538849s" podCreationTimestamp="2026-04-16 18:02:22 +0000 UTC" firstStartedPulling="2026-04-16 18:02:23.959375661 +0000 UTC m=+1.788338771" lastFinishedPulling="2026-04-16 18:02:33.68054807 +0000 UTC m=+11.509511196" observedRunningTime="2026-04-16 18:02:43.010254889 +0000 UTC m=+20.839218044" watchObservedRunningTime="2026-04-16 18:02:43.010538849 +0000 UTC m=+20.839501982" Apr 16 18:02:43.060905 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:43.060788 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-9s2wr" podStartSLOduration=11.101231037 podStartE2EDuration="21.060775048s" podCreationTimestamp="2026-04-16 18:02:22 +0000 UTC" firstStartedPulling="2026-04-16 
18:02:23.714013236 +0000 UTC m=+1.542976348" lastFinishedPulling="2026-04-16 18:02:33.673557244 +0000 UTC m=+11.502520359" observedRunningTime="2026-04-16 18:02:43.060581214 +0000 UTC m=+20.889544348" watchObservedRunningTime="2026-04-16 18:02:43.060775048 +0000 UTC m=+20.889738180" Apr 16 18:02:43.138722 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:43.138677 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-sqf5x" podStartSLOduration=3.330012617 podStartE2EDuration="21.138663588s" podCreationTimestamp="2026-04-16 18:02:22 +0000 UTC" firstStartedPulling="2026-04-16 18:02:23.945085764 +0000 UTC m=+1.774048875" lastFinishedPulling="2026-04-16 18:02:41.753736726 +0000 UTC m=+19.582699846" observedRunningTime="2026-04-16 18:02:43.138378487 +0000 UTC m=+20.967341630" watchObservedRunningTime="2026-04-16 18:02:43.138663588 +0000 UTC m=+20.967626722" Apr 16 18:02:43.138867 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:43.138762 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-82kk2" podStartSLOduration=7.9599885 podStartE2EDuration="21.138758137s" podCreationTimestamp="2026-04-16 18:02:22 +0000 UTC" firstStartedPulling="2026-04-16 18:02:23.894432054 +0000 UTC m=+1.723395168" lastFinishedPulling="2026-04-16 18:02:37.073201689 +0000 UTC m=+14.902164805" observedRunningTime="2026-04-16 18:02:43.091936857 +0000 UTC m=+20.920899991" watchObservedRunningTime="2026-04-16 18:02:43.138758137 +0000 UTC m=+20.967721269" Apr 16 18:02:43.186970 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:43.186929 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-128-59.ec2.internal" podStartSLOduration=20.186916359 podStartE2EDuration="20.186916359s" podCreationTimestamp="2026-04-16 18:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:02:43.186835973 +0000 UTC m=+21.015799105" watchObservedRunningTime="2026-04-16 18:02:43.186916359 +0000 UTC m=+21.015879492" Apr 16 18:02:43.651021 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:43.650999 2571 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Apr 16 18:02:43.710152 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:43.710126 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:43.710263 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:43.710227 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:43.850822 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:43.850746 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal" event={"ID":"d7cc70a393babc6302c6cd1539b79c3b","Type":"ContainerStarted","Data":"fd48028047920781f497bce8e63b64623d37d83ac18a5877e7f4adf11a336712"} Apr 16 18:02:43.851872 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:43.851850 2571 generic.go:358] "Generic (PLEG): container finished" podID="529cdc35-2ba8-48a7-8e7c-fefbb7c00f18" containerID="d6c0aae330bf7026e2ad288d1cdfc480f20f2ee34ee6136d813b129d1d753281" exitCode=0 Apr 16 18:02:43.851968 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:43.851918 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8cd6l" event={"ID":"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18","Type":"ContainerDied","Data":"d6c0aae330bf7026e2ad288d1cdfc480f20f2ee34ee6136d813b129d1d753281"} Apr 16 18:02:43.853414 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:43.853395 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" event={"ID":"a8e6ff0a-fab0-4399-9c0e-356c833b2958","Type":"ContainerStarted","Data":"aa9763c1e9853273a4b8d93b371654c135ef8ec2af955683e9fa490f3ea7cf57"} Apr 16 18:02:43.854688 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:43.854661 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-jw52k" event={"ID":"96576415-5af9-4a8f-a718-72470bf1a7d9","Type":"ContainerStarted","Data":"adf845de8dadda49e89b53a4c49744990e8f58ef17c849a16ad5157489916507"} Apr 16 18:02:43.870916 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:43.870877 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-128-59.ec2.internal" podStartSLOduration=20.870864337 podStartE2EDuration="20.870864337s" podCreationTimestamp="2026-04-16 18:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:02:43.870661961 +0000 UTC m=+21.699625103" watchObservedRunningTime="2026-04-16 18:02:43.870864337 +0000 UTC m=+21.699827470" Apr 16 18:02:43.889770 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:43.889730 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-jw52k" podStartSLOduration=4.117531186 podStartE2EDuration="21.889719487s" podCreationTimestamp="2026-04-16 18:02:22 +0000 UTC" firstStartedPulling="2026-04-16 18:02:23.978668396 +0000 UTC m=+1.807631507" lastFinishedPulling="2026-04-16 18:02:41.75085668 +0000 UTC m=+19.579819808" observedRunningTime="2026-04-16 18:02:43.889425423 +0000 UTC m=+21.718388556" watchObservedRunningTime="2026-04-16 18:02:43.889719487 +0000 UTC m=+21.718682619" Apr 16 18:02:44.624482 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:44.624448 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-9s2wr" Apr 16 18:02:44.625106 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:44.625086 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-9s2wr" Apr 16 18:02:44.643946 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:44.643801 
2571 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-16T18:02:43.651016174Z","UUID":"44a66d34-001d-4feb-be13-6f181d45d9f3","Handler":null,"Name":"","Endpoint":""} Apr 16 18:02:44.647014 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:44.646987 2571 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 16 18:02:44.647014 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:44.647020 2571 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 16 18:02:44.710748 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:44.710721 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:44.710905 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:44.710857 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:44.859164 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:44.859126 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" event={"ID":"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f","Type":"ContainerStarted","Data":"2e9e23d7a8f43741ac922ea0d9ebb1e98052d6bc8698747d6bbeb4294b6d115c"} Apr 16 18:02:44.861277 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:44.861203 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" event={"ID":"a8e6ff0a-fab0-4399-9c0e-356c833b2958","Type":"ContainerStarted","Data":"d3e8a938c3b01548bc0990ca74f30ae3e81bc0a2080475a60ffeb2d77a1bb1b2"} Apr 16 18:02:44.861876 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:44.861542 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-9s2wr" Apr 16 18:02:44.862027 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:44.862008 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-9s2wr" Apr 16 18:02:44.880036 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:44.879992 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-fprmr" podStartSLOduration=2.512224796 podStartE2EDuration="22.879981597s" podCreationTimestamp="2026-04-16 18:02:22 +0000 UTC" firstStartedPulling="2026-04-16 18:02:24.019906422 +0000 UTC m=+1.848869534" lastFinishedPulling="2026-04-16 18:02:44.387663204 +0000 UTC m=+22.216626335" observedRunningTime="2026-04-16 18:02:44.879507665 +0000 UTC m=+22.708470802" watchObservedRunningTime="2026-04-16 18:02:44.879981597 +0000 UTC m=+22.708944730" Apr 16 18:02:45.709986 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:45.709951 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:45.710158 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:45.710079 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:46.710369 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:46.710333 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:46.710769 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:46.710469 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:47.710528 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:47.710493 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:47.711053 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:47.710625 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:47.870121 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:47.869976 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" event={"ID":"bf2455ff-3f2f-4b0a-9d79-994f43be7b2f","Type":"ContainerStarted","Data":"a4a18403272faee8cd31539fa52fcc68079d09957d7797731cd5f632a8862250"} Apr 16 18:02:48.710327 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:48.710295 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:48.710669 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:48.710422 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:48.872885 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:48.872851 2571 generic.go:358] "Generic (PLEG): container finished" podID="529cdc35-2ba8-48a7-8e7c-fefbb7c00f18" containerID="9dbcf2552ea8b0b8f90e46e3369f9c3a9e113f02cd2bcdf6b40835ecac06fa23" exitCode=0 Apr 16 18:02:48.873071 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:48.872942 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8cd6l" event={"ID":"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18","Type":"ContainerDied","Data":"9dbcf2552ea8b0b8f90e46e3369f9c3a9e113f02cd2bcdf6b40835ecac06fa23"} Apr 16 18:02:48.873355 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:48.873338 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:48.888757 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:48.888734 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:48.919073 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:48.919033 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" podStartSLOduration=8.714033675 podStartE2EDuration="26.919020771s" podCreationTimestamp="2026-04-16 18:02:22 +0000 UTC" firstStartedPulling="2026-04-16 18:02:23.932702818 +0000 UTC m=+1.761665929" lastFinishedPulling="2026-04-16 18:02:42.137689908 +0000 UTC m=+19.966653025" observedRunningTime="2026-04-16 18:02:48.918550331 +0000 UTC m=+26.747513456" watchObservedRunningTime="2026-04-16 18:02:48.919020771 +0000 UTC m=+26.747983904" Apr 16 18:02:49.710567 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:49.710375 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:49.710713 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:49.710673 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:49.810110 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:49.810081 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-9b2bn"] Apr 16 18:02:49.813293 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:49.813270 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tv8pg"] Apr 16 18:02:49.813394 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:49.813381 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:49.813561 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:49.813537 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:49.876858 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:49.876762 2571 generic.go:358] "Generic (PLEG): container finished" podID="529cdc35-2ba8-48a7-8e7c-fefbb7c00f18" containerID="3713d62237a1a6febaee50146dae45f1e6c2279181efbc6b2737df35c3a99456" exitCode=0 Apr 16 18:02:49.876858 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:49.876848 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:49.877021 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:49.876859 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8cd6l" event={"ID":"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18","Type":"ContainerDied","Data":"3713d62237a1a6febaee50146dae45f1e6c2279181efbc6b2737df35c3a99456"} Apr 16 18:02:49.877099 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:49.877079 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:49.878061 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:49.877419 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:49.878061 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:49.877458 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:49.891048 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:49.891028 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:02:50.880320 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:50.880280 2571 generic.go:358] "Generic (PLEG): container finished" podID="529cdc35-2ba8-48a7-8e7c-fefbb7c00f18" containerID="ed58dacfadb348db89b12644207f3a487ccced471e88b49c095aded9cd34d49f" exitCode=0 Apr 16 18:02:50.880774 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:50.880368 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8cd6l" event={"ID":"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18","Type":"ContainerDied","Data":"ed58dacfadb348db89b12644207f3a487ccced471e88b49c095aded9cd34d49f"} Apr 16 18:02:51.710282 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:51.710196 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:51.710282 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:51.710245 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:51.710485 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:51.710320 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:51.710485 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:51.710431 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:53.710743 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:53.710706 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:53.711186 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:53.710743 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:53.711186 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:53.710850 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:02:53.711186 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:53.710978 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-9b2bn" podUID="3d6007c8-6817-406b-894d-8f5fefd81911" Apr 16 18:02:55.057778 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.057697 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-128-59.ec2.internal" event="NodeReady" Apr 16 18:02:55.058262 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.057845 2571 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 16 18:02:55.131923 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.131879 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-7d8x4"] Apr 16 18:02:55.162971 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.162945 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-pzcv4"] Apr 16 18:02:55.163151 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.163126 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-7d8x4" Apr 16 18:02:55.166242 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.166220 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 16 18:02:55.166973 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.166956 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 16 18:02:55.166973 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.166965 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-j5p9z\"" Apr 16 18:02:55.188457 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.188432 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pzcv4"] Apr 16 18:02:55.188457 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.188460 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7d8x4"] Apr 16 18:02:55.188638 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.188559 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:02:55.192481 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.192455 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 16 18:02:55.192579 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.192479 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 16 18:02:55.192579 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.192492 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-dxq2x\"" Apr 16 18:02:55.192579 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.192524 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 16 18:02:55.325879 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.325786 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:02:55.325879 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.325849 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2thg\" (UniqueName: \"kubernetes.io/projected/5e6f71fd-65bf-41c3-b270-e42236bbe730-kube-api-access-n2thg\") pod \"ingress-canary-pzcv4\" (UID: \"5e6f71fd-65bf-41c3-b270-e42236bbe730\") " pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:02:55.326100 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.325927 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg2m8\" (UniqueName: \"kubernetes.io/projected/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-kube-api-access-vg2m8\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:02:55.326100 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.326021 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert\") pod \"ingress-canary-pzcv4\" (UID: \"5e6f71fd-65bf-41c3-b270-e42236bbe730\") " pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:02:55.326100 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.326043 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-config-volume\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:02:55.326100 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.326059 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-tmp-dir\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:02:55.326100 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.326081 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs\") pod \"network-metrics-daemon-tv8pg\" (UID: \"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe\") " pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:55.326320 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:55.326234 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:02:55.326320 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:55.326312 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs podName:28103df6-de37-4b7f-b3e8-6ef03a0d1cfe nodeName:}" failed. No retries permitted until 2026-04-16 18:03:27.326290806 +0000 UTC m=+65.155253937 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs") pod "network-metrics-daemon-tv8pg" (UID: "28103df6-de37-4b7f-b3e8-6ef03a0d1cfe") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:02:55.426637 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.426599 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:02:55.426821 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.426657 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n2thg\" (UniqueName: \"kubernetes.io/projected/5e6f71fd-65bf-41c3-b270-e42236bbe730-kube-api-access-n2thg\") pod \"ingress-canary-pzcv4\" (UID: \"5e6f71fd-65bf-41c3-b270-e42236bbe730\") " pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:02:55.426821 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.426681 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vg2m8\" (UniqueName: \"kubernetes.io/projected/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-kube-api-access-vg2m8\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:02:55.426821 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.426710 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z9rvc\" (UniqueName: \"kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc\") pod \"network-check-target-9b2bn\" (UID: \"3d6007c8-6817-406b-894d-8f5fefd81911\") " pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:55.426821 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:55.426719 2571 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:02:55.426821 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:55.426784 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls podName:cc0e7134-4a00-4ae4-9fdd-e97a96de72f4 nodeName:}" failed. No retries permitted until 2026-04-16 18:02:55.926762535 +0000 UTC m=+33.755725651 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls") pod "dns-default-7d8x4" (UID: "cc0e7134-4a00-4ae4-9fdd-e97a96de72f4") : secret "dns-default-metrics-tls" not found Apr 16 18:02:55.426821 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.426803 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert\") pod \"ingress-canary-pzcv4\" (UID: \"5e6f71fd-65bf-41c3-b270-e42236bbe730\") " pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:02:55.426821 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:55.426812 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:02:55.426821 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:55.426826 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:02:55.427220 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.426832 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-config-volume\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:02:55.427220 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.426859 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-tmp-dir\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:02:55.427220 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:55.426838 2571 projected.go:194] Error preparing data for projected volume kube-api-access-z9rvc for pod openshift-network-diagnostics/network-check-target-9b2bn: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:02:55.427220 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:55.426932 2571 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:02:55.427220 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:55.427004 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc podName:3d6007c8-6817-406b-894d-8f5fefd81911 nodeName:}" failed. No retries permitted until 2026-04-16 18:03:27.426985636 +0000 UTC m=+65.255948762 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-z9rvc" (UniqueName: "kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc") pod "network-check-target-9b2bn" (UID: "3d6007c8-6817-406b-894d-8f5fefd81911") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:02:55.427220 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:55.427024 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert podName:5e6f71fd-65bf-41c3-b270-e42236bbe730 nodeName:}" failed. No retries permitted until 2026-04-16 18:02:55.927015281 +0000 UTC m=+33.755978393 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert") pod "ingress-canary-pzcv4" (UID: "5e6f71fd-65bf-41c3-b270-e42236bbe730") : secret "canary-serving-cert" not found Apr 16 18:02:55.427436 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.427245 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-tmp-dir\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:02:55.427436 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.427363 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-config-volume\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:02:55.442752 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.442724 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg2m8\" (UniqueName: \"kubernetes.io/projected/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-kube-api-access-vg2m8\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:02:55.442902 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.442882 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2thg\" (UniqueName: \"kubernetes.io/projected/5e6f71fd-65bf-41c3-b270-e42236bbe730-kube-api-access-n2thg\") pod \"ingress-canary-pzcv4\" (UID: \"5e6f71fd-65bf-41c3-b270-e42236bbe730\") " pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:02:55.710498 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.710461 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:02:55.710710 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.710467 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:02:55.713242 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.713217 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 16 18:02:55.713242 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.713238 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-fs95d\"" Apr 16 18:02:55.713638 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.713615 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 16 18:02:55.714242 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.714222 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-jt7dn\"" Apr 16 18:02:55.714679 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.714648 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 16 18:02:55.930062 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.930014 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:02:55.930258 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:55.930097 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert\") pod \"ingress-canary-pzcv4\" (UID: \"5e6f71fd-65bf-41c3-b270-e42236bbe730\") " pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:02:55.930258 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:55.930185 2571 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:02:55.930258 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:55.930201 2571 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:02:55.930443 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:55.930263 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls podName:cc0e7134-4a00-4ae4-9fdd-e97a96de72f4 nodeName:}" failed. No retries permitted until 2026-04-16 18:02:56.930240723 +0000 UTC m=+34.759203839 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls") pod "dns-default-7d8x4" (UID: "cc0e7134-4a00-4ae4-9fdd-e97a96de72f4") : secret "dns-default-metrics-tls" not found Apr 16 18:02:55.930443 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:55.930283 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert podName:5e6f71fd-65bf-41c3-b270-e42236bbe730 nodeName:}" failed. No retries permitted until 2026-04-16 18:02:56.930273664 +0000 UTC m=+34.759236780 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert") pod "ingress-canary-pzcv4" (UID: "5e6f71fd-65bf-41c3-b270-e42236bbe730") : secret "canary-serving-cert" not found Apr 16 18:02:56.938432 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:56.938402 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert\") pod \"ingress-canary-pzcv4\" (UID: \"5e6f71fd-65bf-41c3-b270-e42236bbe730\") " pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:02:56.938805 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:56.938456 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:02:56.938805 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:56.938541 2571 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:02:56.938805 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:56.938545 2571 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:02:56.938805 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:56.938612 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls podName:cc0e7134-4a00-4ae4-9fdd-e97a96de72f4 nodeName:}" failed. No retries permitted until 2026-04-16 18:02:58.938578915 +0000 UTC m=+36.767542025 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls") pod "dns-default-7d8x4" (UID: "cc0e7134-4a00-4ae4-9fdd-e97a96de72f4") : secret "dns-default-metrics-tls" not found Apr 16 18:02:56.938805 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:56.938626 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert podName:5e6f71fd-65bf-41c3-b270-e42236bbe730 nodeName:}" failed. No retries permitted until 2026-04-16 18:02:58.938620511 +0000 UTC m=+36.767583623 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert") pod "ingress-canary-pzcv4" (UID: "5e6f71fd-65bf-41c3-b270-e42236bbe730") : secret "canary-serving-cert" not found Apr 16 18:02:57.907576 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:57.907352 2571 generic.go:358] "Generic (PLEG): container finished" podID="529cdc35-2ba8-48a7-8e7c-fefbb7c00f18" containerID="2538d9f9cf59eed7b787d10a6e91012fac82ad616a067455b78912230684deda" exitCode=0 Apr 16 18:02:57.907729 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:57.907430 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8cd6l" event={"ID":"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18","Type":"ContainerDied","Data":"2538d9f9cf59eed7b787d10a6e91012fac82ad616a067455b78912230684deda"} Apr 16 18:02:58.911722 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:58.911693 2571 generic.go:358] "Generic (PLEG): container finished" podID="529cdc35-2ba8-48a7-8e7c-fefbb7c00f18" containerID="34635f17e8ba054427977b09419948d2f3900d44359c7d7d49e3c1cb850d3036" exitCode=0 Apr 16 18:02:58.912209 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:58.911730 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8cd6l" event={"ID":"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18","Type":"ContainerDied","Data":"34635f17e8ba054427977b09419948d2f3900d44359c7d7d49e3c1cb850d3036"} Apr 16 18:02:58.953804 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:58.953730 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert\") pod \"ingress-canary-pzcv4\" (UID: \"5e6f71fd-65bf-41c3-b270-e42236bbe730\") " pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:02:58.953947 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:58.953823 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:02:58.953947 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:58.953901 2571 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:02:58.954048 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:58.953951 2571 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:02:58.954048 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:58.954012 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert podName:5e6f71fd-65bf-41c3-b270-e42236bbe730 nodeName:}" failed. No retries permitted until 2026-04-16 18:03:02.953968819 +0000 UTC m=+40.782931947 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert") pod "ingress-canary-pzcv4" (UID: "5e6f71fd-65bf-41c3-b270-e42236bbe730") : secret "canary-serving-cert" not found Apr 16 18:02:58.954048 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:02:58.954033 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls podName:cc0e7134-4a00-4ae4-9fdd-e97a96de72f4 nodeName:}" failed. No retries permitted until 2026-04-16 18:03:02.954023382 +0000 UTC m=+40.782986496 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls") pod "dns-default-7d8x4" (UID: "cc0e7134-4a00-4ae4-9fdd-e97a96de72f4") : secret "dns-default-metrics-tls" not found Apr 16 18:02:59.916090 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:59.916048 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8cd6l" event={"ID":"529cdc35-2ba8-48a7-8e7c-fefbb7c00f18","Type":"ContainerStarted","Data":"78705573cb4ea525f9c21388de873bd74bb13e6d107fff890c0eacd7378d99c1"} Apr 16 18:02:59.938942 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:02:59.938892 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-8cd6l" podStartSLOduration=4.275238488 podStartE2EDuration="37.938880227s" podCreationTimestamp="2026-04-16 18:02:22 +0000 UTC" firstStartedPulling="2026-04-16 18:02:23.906174081 +0000 UTC m=+1.735137193" lastFinishedPulling="2026-04-16 18:02:57.569815806 +0000 UTC m=+35.398778932" observedRunningTime="2026-04-16 18:02:59.93824544 +0000 UTC m=+37.767208573" watchObservedRunningTime="2026-04-16 18:02:59.938880227 +0000 UTC m=+37.767843360" Apr 16 18:03:02.465490 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.465459 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-9d9c8b845-7hq66"] Apr 16 18:03:02.496753 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.496723 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-9d9c8b845-7hq66"] Apr 16 18:03:02.496880 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.496819 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-9d9c8b845-7hq66" Apr 16 18:03:02.500353 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.500333 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-dockercfg-srxdb\"" Apr 16 18:03:02.500900 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.500880 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"open-cluster-management-image-pull-credentials\"" Apr 16 18:03:02.500990 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.500939 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"openshift-service-ca.crt\"" Apr 16 18:03:02.500990 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.500954 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"kube-root-ca.crt\"" Apr 16 18:03:02.500990 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.500961 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-hub-kubeconfig\"" Apr 16 18:03:02.512220 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.512200 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc"] Apr 16 18:03:02.546816 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.546735 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc"] Apr 16 18:03:02.546898 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.546819 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" Apr 16 18:03:02.550537 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.550455 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"work-manager-hub-kubeconfig\"" Apr 16 18:03:02.575346 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.575320 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfqzr\" (UniqueName: \"kubernetes.io/projected/4d627cc4-1505-4932-93b6-509c446670ce-kube-api-access-cfqzr\") pod \"managed-serviceaccount-addon-agent-9d9c8b845-7hq66\" (UID: \"4d627cc4-1505-4932-93b6-509c446670ce\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-9d9c8b845-7hq66" Apr 16 18:03:02.575442 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.575384 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/4d627cc4-1505-4932-93b6-509c446670ce-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-9d9c8b845-7hq66\" (UID: \"4d627cc4-1505-4932-93b6-509c446670ce\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-9d9c8b845-7hq66" Apr 16 18:03:02.676136 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.676108 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cfqzr\" (UniqueName: \"kubernetes.io/projected/4d627cc4-1505-4932-93b6-509c446670ce-kube-api-access-cfqzr\") pod \"managed-serviceaccount-addon-agent-9d9c8b845-7hq66\" (UID: \"4d627cc4-1505-4932-93b6-509c446670ce\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-9d9c8b845-7hq66" Apr 16 18:03:02.676257 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.676146 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/05049297-92d0-44f1-a940-b679011a1fcd-klusterlet-config\") pod \"klusterlet-addon-workmgr-5b4c859c74-p5rwc\" (UID: \"05049297-92d0-44f1-a940-b679011a1fcd\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" Apr 16 18:03:02.676257 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.676172 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/05049297-92d0-44f1-a940-b679011a1fcd-tmp\") pod \"klusterlet-addon-workmgr-5b4c859c74-p5rwc\" (UID: \"05049297-92d0-44f1-a940-b679011a1fcd\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" Apr 16 18:03:02.676257 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.676219 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b2rn\" (UniqueName: \"kubernetes.io/projected/05049297-92d0-44f1-a940-b679011a1fcd-kube-api-access-8b2rn\") pod \"klusterlet-addon-workmgr-5b4c859c74-p5rwc\" (UID: \"05049297-92d0-44f1-a940-b679011a1fcd\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" Apr 16 18:03:02.676370 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.676299 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/4d627cc4-1505-4932-93b6-509c446670ce-hub-kubeconfig\") pod 
\"managed-serviceaccount-addon-agent-9d9c8b845-7hq66\" (UID: \"4d627cc4-1505-4932-93b6-509c446670ce\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-9d9c8b845-7hq66" Apr 16 18:03:02.679274 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.679254 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/4d627cc4-1505-4932-93b6-509c446670ce-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-9d9c8b845-7hq66\" (UID: \"4d627cc4-1505-4932-93b6-509c446670ce\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-9d9c8b845-7hq66" Apr 16 18:03:02.685362 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.685341 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfqzr\" (UniqueName: \"kubernetes.io/projected/4d627cc4-1505-4932-93b6-509c446670ce-kube-api-access-cfqzr\") pod \"managed-serviceaccount-addon-agent-9d9c8b845-7hq66\" (UID: \"4d627cc4-1505-4932-93b6-509c446670ce\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-9d9c8b845-7hq66" Apr 16 18:03:02.777458 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.777383 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/05049297-92d0-44f1-a940-b679011a1fcd-klusterlet-config\") pod \"klusterlet-addon-workmgr-5b4c859c74-p5rwc\" (UID: \"05049297-92d0-44f1-a940-b679011a1fcd\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" Apr 16 18:03:02.777458 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.777414 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/05049297-92d0-44f1-a940-b679011a1fcd-tmp\") pod \"klusterlet-addon-workmgr-5b4c859c74-p5rwc\" (UID: \"05049297-92d0-44f1-a940-b679011a1fcd\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" Apr 16 18:03:02.777458 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.777453 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8b2rn\" (UniqueName: \"kubernetes.io/projected/05049297-92d0-44f1-a940-b679011a1fcd-kube-api-access-8b2rn\") pod \"klusterlet-addon-workmgr-5b4c859c74-p5rwc\" (UID: \"05049297-92d0-44f1-a940-b679011a1fcd\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" Apr 16 18:03:02.803548 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.803521 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/05049297-92d0-44f1-a940-b679011a1fcd-tmp\") pod \"klusterlet-addon-workmgr-5b4c859c74-p5rwc\" (UID: \"05049297-92d0-44f1-a940-b679011a1fcd\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" Apr 16 18:03:02.803849 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.803834 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/05049297-92d0-44f1-a940-b679011a1fcd-klusterlet-config\") pod \"klusterlet-addon-workmgr-5b4c859c74-p5rwc\" (UID: \"05049297-92d0-44f1-a940-b679011a1fcd\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" Apr 16 18:03:02.803900 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.803849 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-8b2rn\" (UniqueName: \"kubernetes.io/projected/05049297-92d0-44f1-a940-b679011a1fcd-kube-api-access-8b2rn\") pod \"klusterlet-addon-workmgr-5b4c859c74-p5rwc\" (UID: \"05049297-92d0-44f1-a940-b679011a1fcd\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" Apr 16 18:03:02.815037 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.815020 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-9d9c8b845-7hq66" Apr 16 18:03:02.854697 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.854666 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" Apr 16 18:03:02.979170 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.979138 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:03:02.979309 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:02.979216 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert\") pod \"ingress-canary-pzcv4\" (UID: \"5e6f71fd-65bf-41c3-b270-e42236bbe730\") " pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:03:02.979309 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:02.979291 2571 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:03:02.979413 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:02.979346 2571 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:03:02.979413 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:02.979351 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls podName:cc0e7134-4a00-4ae4-9fdd-e97a96de72f4 nodeName:}" failed. No retries permitted until 2026-04-16 18:03:10.979333828 +0000 UTC m=+48.808296947 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls") pod "dns-default-7d8x4" (UID: "cc0e7134-4a00-4ae4-9fdd-e97a96de72f4") : secret "dns-default-metrics-tls" not found Apr 16 18:03:02.979413 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:02.979406 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert podName:5e6f71fd-65bf-41c3-b270-e42236bbe730 nodeName:}" failed. No retries permitted until 2026-04-16 18:03:10.97939337 +0000 UTC m=+48.808356481 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert") pod "ingress-canary-pzcv4" (UID: "5e6f71fd-65bf-41c3-b270-e42236bbe730") : secret "canary-serving-cert" not found Apr 16 18:03:03.009421 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:03.009353 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-9d9c8b845-7hq66"] Apr 16 18:03:03.012407 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:03.012384 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc"] Apr 16 18:03:03.013425 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:03:03.013399 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d627cc4_1505_4932_93b6_509c446670ce.slice/crio-6f885b4a47b9c33a9c7dec4fb3de6f01917ad8352a51360a3f88c61a88af176a WatchSource:0}: Error finding container 6f885b4a47b9c33a9c7dec4fb3de6f01917ad8352a51360a3f88c61a88af176a: Status 404 returned error can't find the container with id 6f885b4a47b9c33a9c7dec4fb3de6f01917ad8352a51360a3f88c61a88af176a Apr 16 18:03:03.015873 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:03:03.015850 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05049297_92d0_44f1_a940_b679011a1fcd.slice/crio-b0c852a7b6e03d28d2d7e0a9e3709bb9bdab08a540714967a529e1f0ce4859bd WatchSource:0}: Error finding container b0c852a7b6e03d28d2d7e0a9e3709bb9bdab08a540714967a529e1f0ce4859bd: Status 404 returned error can't find the container with id b0c852a7b6e03d28d2d7e0a9e3709bb9bdab08a540714967a529e1f0ce4859bd Apr 16 18:03:03.926744 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:03.926690 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" event={"ID":"05049297-92d0-44f1-a940-b679011a1fcd","Type":"ContainerStarted","Data":"b0c852a7b6e03d28d2d7e0a9e3709bb9bdab08a540714967a529e1f0ce4859bd"} Apr 16 18:03:03.928059 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:03.928033 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-9d9c8b845-7hq66" event={"ID":"4d627cc4-1505-4932-93b6-509c446670ce","Type":"ContainerStarted","Data":"6f885b4a47b9c33a9c7dec4fb3de6f01917ad8352a51360a3f88c61a88af176a"} Apr 16 18:03:08.938859 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:08.938821 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" event={"ID":"05049297-92d0-44f1-a940-b679011a1fcd","Type":"ContainerStarted","Data":"668a1a42972dced5b79320a10b333a70dd6cfa05b83fe160b7c3e216a99a69bf"} Apr 16 18:03:08.939326 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:08.938986 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" Apr 16 18:03:08.940138 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:08.940116 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-9d9c8b845-7hq66" 
event={"ID":"4d627cc4-1505-4932-93b6-509c446670ce","Type":"ContainerStarted","Data":"29a5ef42ba8ff4e9a91ffe5601c6777754fd5efc963adcf32a461a2f7ae9c357"} Apr 16 18:03:08.940880 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:08.940861 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" Apr 16 18:03:08.953540 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:08.953494 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" podStartSLOduration=1.905229325 podStartE2EDuration="6.953483243s" podCreationTimestamp="2026-04-16 18:03:02 +0000 UTC" firstStartedPulling="2026-04-16 18:03:03.017459126 +0000 UTC m=+40.846422238" lastFinishedPulling="2026-04-16 18:03:08.065713044 +0000 UTC m=+45.894676156" observedRunningTime="2026-04-16 18:03:08.952508105 +0000 UTC m=+46.781471239" watchObservedRunningTime="2026-04-16 18:03:08.953483243 +0000 UTC m=+46.782446376" Apr 16 18:03:08.984738 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:08.984697 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-9d9c8b845-7hq66" podStartSLOduration=1.940968689 podStartE2EDuration="6.984687647s" podCreationTimestamp="2026-04-16 18:03:02 +0000 UTC" firstStartedPulling="2026-04-16 18:03:03.015809739 +0000 UTC m=+40.844772851" lastFinishedPulling="2026-04-16 18:03:08.059528684 +0000 UTC m=+45.888491809" observedRunningTime="2026-04-16 18:03:08.984421306 +0000 UTC m=+46.813384439" watchObservedRunningTime="2026-04-16 18:03:08.984687647 +0000 UTC m=+46.813650779" Apr 16 18:03:11.042390 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:11.042352 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:03:11.042877 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:11.042439 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert\") pod \"ingress-canary-pzcv4\" (UID: \"5e6f71fd-65bf-41c3-b270-e42236bbe730\") " pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:03:11.042877 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:11.042480 2571 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:03:11.042877 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:11.042536 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls podName:cc0e7134-4a00-4ae4-9fdd-e97a96de72f4 nodeName:}" failed. No retries permitted until 2026-04-16 18:03:27.042521564 +0000 UTC m=+64.871484675 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls") pod "dns-default-7d8x4" (UID: "cc0e7134-4a00-4ae4-9fdd-e97a96de72f4") : secret "dns-default-metrics-tls" not found Apr 16 18:03:11.042877 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:11.042549 2571 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:03:11.042877 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:11.042632 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert podName:5e6f71fd-65bf-41c3-b270-e42236bbe730 nodeName:}" failed. No retries permitted until 2026-04-16 18:03:27.042615862 +0000 UTC m=+64.871578984 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert") pod "ingress-canary-pzcv4" (UID: "5e6f71fd-65bf-41c3-b270-e42236bbe730") : secret "canary-serving-cert" not found Apr 16 18:03:21.892574 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:21.892548 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tlgs4" Apr 16 18:03:27.044910 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:27.044874 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:03:27.045272 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:27.044927 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert\") pod \"ingress-canary-pzcv4\" (UID: \"5e6f71fd-65bf-41c3-b270-e42236bbe730\") " pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:03:27.045272 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:27.045006 2571 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:03:27.045272 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:27.045016 2571 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:03:27.045272 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:27.045063 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert podName:5e6f71fd-65bf-41c3-b270-e42236bbe730 nodeName:}" failed. No retries permitted until 2026-04-16 18:03:59.045047772 +0000 UTC m=+96.874010886 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert") pod "ingress-canary-pzcv4" (UID: "5e6f71fd-65bf-41c3-b270-e42236bbe730") : secret "canary-serving-cert" not found Apr 16 18:03:27.045272 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:27.045076 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls podName:cc0e7134-4a00-4ae4-9fdd-e97a96de72f4 nodeName:}" failed. No retries permitted until 2026-04-16 18:03:59.045070463 +0000 UTC m=+96.874033574 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls") pod "dns-default-7d8x4" (UID: "cc0e7134-4a00-4ae4-9fdd-e97a96de72f4") : secret "dns-default-metrics-tls" not found Apr 16 18:03:27.347380 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:27.347344 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs\") pod \"network-metrics-daemon-tv8pg\" (UID: \"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe\") " pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:03:27.350177 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:27.350158 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 16 18:03:27.358328 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:27.358306 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 16 18:03:27.358462 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:27.358376 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs podName:28103df6-de37-4b7f-b3e8-6ef03a0d1cfe nodeName:}" failed. No retries permitted until 2026-04-16 18:04:31.358355036 +0000 UTC m=+129.187318149 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs") pod "network-metrics-daemon-tv8pg" (UID: "28103df6-de37-4b7f-b3e8-6ef03a0d1cfe") : secret "metrics-daemon-secret" not found Apr 16 18:03:27.448511 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:27.448475 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z9rvc\" (UniqueName: \"kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc\") pod \"network-check-target-9b2bn\" (UID: \"3d6007c8-6817-406b-894d-8f5fefd81911\") " pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:03:27.451111 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:27.451094 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 16 18:03:27.461345 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:27.461328 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 16 18:03:27.471365 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:27.471345 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9rvc\" (UniqueName: \"kubernetes.io/projected/3d6007c8-6817-406b-894d-8f5fefd81911-kube-api-access-z9rvc\") pod \"network-check-target-9b2bn\" (UID: \"3d6007c8-6817-406b-894d-8f5fefd81911\") " pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:03:27.529701 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:27.529673 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-fs95d\"" Apr 16 18:03:27.538416 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:27.538395 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:03:27.666464 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:27.666431 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-9b2bn"] Apr 16 18:03:27.669529 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:03:27.669491 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d6007c8_6817_406b_894d_8f5fefd81911.slice/crio-f5638bbd36186aa66a73291c8964bb1a3f4413edfe052cf46fdd42b2d6600605 WatchSource:0}: Error finding container f5638bbd36186aa66a73291c8964bb1a3f4413edfe052cf46fdd42b2d6600605: Status 404 returned error can't find the container with id f5638bbd36186aa66a73291c8964bb1a3f4413edfe052cf46fdd42b2d6600605 Apr 16 18:03:27.973800 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:27.973716 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-9b2bn" event={"ID":"3d6007c8-6817-406b-894d-8f5fefd81911","Type":"ContainerStarted","Data":"f5638bbd36186aa66a73291c8964bb1a3f4413edfe052cf46fdd42b2d6600605"} Apr 16 18:03:30.981348 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:30.981314 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-9b2bn" event={"ID":"3d6007c8-6817-406b-894d-8f5fefd81911","Type":"ContainerStarted","Data":"dfebeccf55fd42b48196b4fde62a20c14c4a875aa7d3d2179cd8d79e0222299a"} Apr 16 18:03:30.981776 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:30.981460 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:03:30.995931 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:30.995882 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-9b2bn" podStartSLOduration=66.319796451 podStartE2EDuration="1m8.995868981s" podCreationTimestamp="2026-04-16 18:02:22 +0000 UTC" firstStartedPulling="2026-04-16 18:03:27.671730097 +0000 UTC m=+65.500693208" lastFinishedPulling="2026-04-16 18:03:30.347802624 +0000 UTC m=+68.176765738" observedRunningTime="2026-04-16 18:03:30.995677957 +0000 UTC m=+68.824641083" watchObservedRunningTime="2026-04-16 18:03:30.995868981 +0000 UTC m=+68.824832114" Apr 16 18:03:59.068910 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:59.068869 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:03:59.069361 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:03:59.068919 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert\") pod \"ingress-canary-pzcv4\" (UID: \"5e6f71fd-65bf-41c3-b270-e42236bbe730\") " pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:03:59.069361 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:59.069003 2571 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:03:59.069361 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:59.069017 2571 secret.go:189] Couldn't get secret 
openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:03:59.069361 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:59.069074 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls podName:cc0e7134-4a00-4ae4-9fdd-e97a96de72f4 nodeName:}" failed. No retries permitted until 2026-04-16 18:05:03.069057218 +0000 UTC m=+160.898020329 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls") pod "dns-default-7d8x4" (UID: "cc0e7134-4a00-4ae4-9fdd-e97a96de72f4") : secret "dns-default-metrics-tls" not found Apr 16 18:03:59.069361 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:03:59.069089 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert podName:5e6f71fd-65bf-41c3-b270-e42236bbe730 nodeName:}" failed. No retries permitted until 2026-04-16 18:05:03.069082049 +0000 UTC m=+160.898045160 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert") pod "ingress-canary-pzcv4" (UID: "5e6f71fd-65bf-41c3-b270-e42236bbe730") : secret "canary-serving-cert" not found Apr 16 18:04:01.985621 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:01.985574 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-9b2bn" Apr 16 18:04:31.381915 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:31.381877 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs\") pod \"network-metrics-daemon-tv8pg\" (UID: \"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe\") " pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:04:31.382484 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:31.382051 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 16 18:04:31.382484 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:31.382137 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs podName:28103df6-de37-4b7f-b3e8-6ef03a0d1cfe nodeName:}" failed. No retries permitted until 2026-04-16 18:06:33.38211426 +0000 UTC m=+251.211077374 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs") pod "network-metrics-daemon-tv8pg" (UID: "28103df6-de37-4b7f-b3e8-6ef03a0d1cfe") : secret "metrics-daemon-secret" not found Apr 16 18:04:39.945933 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.945894 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq"] Apr 16 18:04:39.947892 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.947874 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5ccd7954bb-s86dh"] Apr 16 18:04:39.948047 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.948024 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" Apr 16 18:04:39.949739 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.949718 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:39.951730 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.951705 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Apr 16 18:04:39.952104 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.952060 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Apr 16 18:04:39.952104 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.952073 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-rdl9x\"" Apr 16 18:04:39.952104 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.952102 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Apr 16 18:04:39.952311 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.952061 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Apr 16 18:04:39.957633 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.957556 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Apr 16 18:04:39.957633 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.957610 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Apr 16 18:04:39.957798 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.957660 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Apr 16 18:04:39.957798 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.957703 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-2zdfd\"" Apr 16 18:04:39.957937 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.957923 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"default-ingress-cert\"" Apr 16 18:04:39.960069 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.960049 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Apr 16 18:04:39.974515 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.974488 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq"] Apr 16 18:04:39.975272 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:39.975249 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress/router-default-5ccd7954bb-s86dh"] Apr 16 18:04:40.039262 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.039231 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7gx6\" (UniqueName: \"kubernetes.io/projected/31f4098e-db7f-4b17-a106-9e6043d3cfe0-kube-api-access-j7gx6\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " 
pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:40.039262 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.039270 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjwbv\" (UniqueName: \"kubernetes.io/projected/f675b254-a9f0-44ac-90ed-c28d3c49f83e-kube-api-access-sjwbv\") pod \"cluster-samples-operator-667775844f-d4fmq\" (UID: \"f675b254-a9f0-44ac-90ed-c28d3c49f83e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" Apr 16 18:04:40.039473 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.039291 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:40.039473 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.039356 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-default-certificate\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:40.039473 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.039377 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-d4fmq\" (UID: \"f675b254-a9f0-44ac-90ed-c28d3c49f83e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" Apr 16 18:04:40.039473 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.039398 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:40.039473 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.039432 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-stats-auth\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:40.140007 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.139971 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:40.140216 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.140023 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-stats-auth\") pod \"router-default-5ccd7954bb-s86dh\" (UID: 
\"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:40.140216 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.140059 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j7gx6\" (UniqueName: \"kubernetes.io/projected/31f4098e-db7f-4b17-a106-9e6043d3cfe0-kube-api-access-j7gx6\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:40.140216 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.140081 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sjwbv\" (UniqueName: \"kubernetes.io/projected/f675b254-a9f0-44ac-90ed-c28d3c49f83e-kube-api-access-sjwbv\") pod \"cluster-samples-operator-667775844f-d4fmq\" (UID: \"f675b254-a9f0-44ac-90ed-c28d3c49f83e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" Apr 16 18:04:40.140216 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.140097 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:40.140216 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.140122 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-default-certificate\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:40.140216 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.140143 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-d4fmq\" (UID: \"f675b254-a9f0-44ac-90ed-c28d3c49f83e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" Apr 16 18:04:40.140216 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:40.140174 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle podName:31f4098e-db7f-4b17-a106-9e6043d3cfe0 nodeName:}" failed. No retries permitted until 2026-04-16 18:04:40.64015204 +0000 UTC m=+138.469115174 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle") pod "router-default-5ccd7954bb-s86dh" (UID: "31f4098e-db7f-4b17-a106-9e6043d3cfe0") : configmap references non-existent config key: service-ca.crt Apr 16 18:04:40.140561 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:40.140230 2571 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 16 18:04:40.140561 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:40.140243 2571 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 16 18:04:40.140561 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:40.140296 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs podName:31f4098e-db7f-4b17-a106-9e6043d3cfe0 nodeName:}" failed. No retries permitted until 2026-04-16 18:04:40.640278661 +0000 UTC m=+138.469241786 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs") pod "router-default-5ccd7954bb-s86dh" (UID: "31f4098e-db7f-4b17-a106-9e6043d3cfe0") : secret "router-metrics-certs-default" not found Apr 16 18:04:40.140561 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:40.140317 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls podName:f675b254-a9f0-44ac-90ed-c28d3c49f83e nodeName:}" failed. No retries permitted until 2026-04-16 18:04:40.640307415 +0000 UTC m=+138.469270525 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls") pod "cluster-samples-operator-667775844f-d4fmq" (UID: "f675b254-a9f0-44ac-90ed-c28d3c49f83e") : secret "samples-operator-tls" not found Apr 16 18:04:40.142549 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.142531 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-stats-auth\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:40.142619 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.142565 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-default-certificate\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:40.167949 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.167912 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7gx6\" (UniqueName: \"kubernetes.io/projected/31f4098e-db7f-4b17-a106-9e6043d3cfe0-kube-api-access-j7gx6\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:40.177429 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.177399 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjwbv\" (UniqueName: \"kubernetes.io/projected/f675b254-a9f0-44ac-90ed-c28d3c49f83e-kube-api-access-sjwbv\") pod \"cluster-samples-operator-667775844f-d4fmq\" (UID: \"f675b254-a9f0-44ac-90ed-c28d3c49f83e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" Apr 16 18:04:40.643729 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.643674 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:40.643729 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.643741 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-d4fmq\" (UID: \"f675b254-a9f0-44ac-90ed-c28d3c49f83e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" Apr 16 18:04:40.644014 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:40.643760 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:40.644014 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:40.643825 2571 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 16 
18:04:40.644014 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:40.643890 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle podName:31f4098e-db7f-4b17-a106-9e6043d3cfe0 nodeName:}" failed. No retries permitted until 2026-04-16 18:04:41.643877011 +0000 UTC m=+139.472840121 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle") pod "router-default-5ccd7954bb-s86dh" (UID: "31f4098e-db7f-4b17-a106-9e6043d3cfe0") : configmap references non-existent config key: service-ca.crt Apr 16 18:04:40.644014 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:40.643905 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs podName:31f4098e-db7f-4b17-a106-9e6043d3cfe0 nodeName:}" failed. No retries permitted until 2026-04-16 18:04:41.64389878 +0000 UTC m=+139.472861891 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs") pod "router-default-5ccd7954bb-s86dh" (UID: "31f4098e-db7f-4b17-a106-9e6043d3cfe0") : secret "router-metrics-certs-default" not found Apr 16 18:04:40.644014 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:40.643906 2571 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 16 18:04:40.644014 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:40.643958 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls podName:f675b254-a9f0-44ac-90ed-c28d3c49f83e nodeName:}" failed. No retries permitted until 2026-04-16 18:04:41.643941217 +0000 UTC m=+139.472904328 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls") pod "cluster-samples-operator-667775844f-d4fmq" (UID: "f675b254-a9f0-44ac-90ed-c28d3c49f83e") : secret "samples-operator-tls" not found Apr 16 18:04:41.653434 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:41.653392 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:41.653905 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:41.653458 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-d4fmq\" (UID: \"f675b254-a9f0-44ac-90ed-c28d3c49f83e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" Apr 16 18:04:41.653905 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:41.653478 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:41.653905 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:41.653531 2571 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 16 18:04:41.653905 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:41.653612 2571 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 16 18:04:41.653905 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:41.653618 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs podName:31f4098e-db7f-4b17-a106-9e6043d3cfe0 nodeName:}" failed. No retries permitted until 2026-04-16 18:04:43.653585097 +0000 UTC m=+141.482548208 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs") pod "router-default-5ccd7954bb-s86dh" (UID: "31f4098e-db7f-4b17-a106-9e6043d3cfe0") : secret "router-metrics-certs-default" not found Apr 16 18:04:41.653905 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:41.653660 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle podName:31f4098e-db7f-4b17-a106-9e6043d3cfe0 nodeName:}" failed. No retries permitted until 2026-04-16 18:04:43.65364848 +0000 UTC m=+141.482611594 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle") pod "router-default-5ccd7954bb-s86dh" (UID: "31f4098e-db7f-4b17-a106-9e6043d3cfe0") : configmap references non-existent config key: service-ca.crt Apr 16 18:04:41.653905 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:41.653671 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls podName:f675b254-a9f0-44ac-90ed-c28d3c49f83e nodeName:}" failed. No retries permitted until 2026-04-16 18:04:43.653665181 +0000 UTC m=+141.482628292 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls") pod "cluster-samples-operator-667775844f-d4fmq" (UID: "f675b254-a9f0-44ac-90ed-c28d3c49f83e") : secret "samples-operator-tls" not found Apr 16 18:04:43.666256 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:43.666217 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:43.666722 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:43.666271 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-d4fmq\" (UID: \"f675b254-a9f0-44ac-90ed-c28d3c49f83e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" Apr 16 18:04:43.666722 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:43.666298 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:43.666722 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:43.666382 2571 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 16 18:04:43.666722 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:43.666386 2571 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 16 18:04:43.666722 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:43.666423 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle podName:31f4098e-db7f-4b17-a106-9e6043d3cfe0 nodeName:}" failed. No retries permitted until 2026-04-16 18:04:47.666406166 +0000 UTC m=+145.495369294 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle") pod "router-default-5ccd7954bb-s86dh" (UID: "31f4098e-db7f-4b17-a106-9e6043d3cfe0") : configmap references non-existent config key: service-ca.crt Apr 16 18:04:43.666722 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:43.666460 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls podName:f675b254-a9f0-44ac-90ed-c28d3c49f83e nodeName:}" failed. No retries permitted until 2026-04-16 18:04:47.666449053 +0000 UTC m=+145.495412169 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls") pod "cluster-samples-operator-667775844f-d4fmq" (UID: "f675b254-a9f0-44ac-90ed-c28d3c49f83e") : secret "samples-operator-tls" not found Apr 16 18:04:43.666722 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:43.666487 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs podName:31f4098e-db7f-4b17-a106-9e6043d3cfe0 nodeName:}" failed. No retries permitted until 2026-04-16 18:04:47.666467426 +0000 UTC m=+145.495430540 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs") pod "router-default-5ccd7954bb-s86dh" (UID: "31f4098e-db7f-4b17-a106-9e6043d3cfe0") : secret "router-metrics-certs-default" not found Apr 16 18:04:47.009945 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:47.009910 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-82kk2_433f58f3-2b64-4ade-a6d2-2016a24672b3/dns-node-resolver/0.log" Apr 16 18:04:47.694964 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:47.694922 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:47.695149 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:47.694983 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-d4fmq\" (UID: \"f675b254-a9f0-44ac-90ed-c28d3c49f83e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" Apr 16 18:04:47.695149 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:47.695004 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:47.695149 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:47.695070 2571 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 16 18:04:47.695149 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:47.695095 2571 secret.go:189] Couldn't 
get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 16 18:04:47.695149 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:47.695102 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle podName:31f4098e-db7f-4b17-a106-9e6043d3cfe0 nodeName:}" failed. No retries permitted until 2026-04-16 18:04:55.695089311 +0000 UTC m=+153.524052421 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle") pod "router-default-5ccd7954bb-s86dh" (UID: "31f4098e-db7f-4b17-a106-9e6043d3cfe0") : configmap references non-existent config key: service-ca.crt Apr 16 18:04:47.695313 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:47.695168 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs podName:31f4098e-db7f-4b17-a106-9e6043d3cfe0 nodeName:}" failed. No retries permitted until 2026-04-16 18:04:55.695152737 +0000 UTC m=+153.524115851 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs") pod "router-default-5ccd7954bb-s86dh" (UID: "31f4098e-db7f-4b17-a106-9e6043d3cfe0") : secret "router-metrics-certs-default" not found Apr 16 18:04:47.695313 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:47.695210 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls podName:f675b254-a9f0-44ac-90ed-c28d3c49f83e nodeName:}" failed. No retries permitted until 2026-04-16 18:04:55.695203665 +0000 UTC m=+153.524166776 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls") pod "cluster-samples-operator-667775844f-d4fmq" (UID: "f675b254-a9f0-44ac-90ed-c28d3c49f83e") : secret "samples-operator-tls" not found Apr 16 18:04:48.008457 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:48.008380 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-5pm7k_443d2e7a-08b9-4fa3-b1de-3c569b5764fd/node-ca/0.log" Apr 16 18:04:55.759098 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:55.759047 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:55.759098 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:55.759116 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-d4fmq\" (UID: \"f675b254-a9f0-44ac-90ed-c28d3c49f83e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" Apr 16 18:04:55.759677 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:55.759140 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:55.759677 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:55.759250 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle podName:31f4098e-db7f-4b17-a106-9e6043d3cfe0 nodeName:}" failed. No retries permitted until 2026-04-16 18:05:11.759236112 +0000 UTC m=+169.588199223 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle") pod "router-default-5ccd7954bb-s86dh" (UID: "31f4098e-db7f-4b17-a106-9e6043d3cfe0") : configmap references non-existent config key: service-ca.crt Apr 16 18:04:55.761566 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:55.761529 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f675b254-a9f0-44ac-90ed-c28d3c49f83e-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-d4fmq\" (UID: \"f675b254-a9f0-44ac-90ed-c28d3c49f83e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" Apr 16 18:04:55.761566 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:55.761559 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31f4098e-db7f-4b17-a106-9e6043d3cfe0-metrics-certs\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:04:55.857600 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:55.857542 2571 util.go:30] "No sandbox for pod can be found. 
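The nestedpendingoperations records above show the kubelet's exponential backoff for failed volume mounts: durationBeforeRetry doubles across consecutive failures for the same volumes, 2s, then 4s, then 8s, then 16s (and the prometheus-operator-tls mount later in this log starts at 500ms). A minimal Go sketch of that doubling schedule follows; the 500ms base is taken from the later record, while the 2m cap is an assumption for illustration, since the real limits live in kubelet internals.

```go
// backoff.go — sketch of the doubling durationBeforeRetry pattern visible in
// the nestedpendingoperations records above (500ms, ..., 2s, 4s, 8s, 16s).
// The 500ms base appears later in this log; the 2m cap is an assumption.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		base     = 500 * time.Millisecond // first durationBeforeRetry
		maxDelay = 2 * time.Minute        // assumed upper bound
	)
	delay := base
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %2d: durationBeforeRetry %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```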
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" Apr 16 18:04:55.986865 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:55.986834 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq"] Apr 16 18:04:56.147541 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:56.147505 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" event={"ID":"f675b254-a9f0-44ac-90ed-c28d3c49f83e","Type":"ContainerStarted","Data":"25c183d30ae698ca5b72baae0222dcfb5cc3ee6fe03f3753905c3ba47698dcbf"} Apr 16 18:04:58.152882 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:58.152846 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" event={"ID":"f675b254-a9f0-44ac-90ed-c28d3c49f83e","Type":"ContainerStarted","Data":"cbf651fe76625710cf5fbcaee9642ff2429ccae843692e77431b684a3fdc7bd0"} Apr 16 18:04:58.152882 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:58.152884 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" event={"ID":"f675b254-a9f0-44ac-90ed-c28d3c49f83e","Type":"ContainerStarted","Data":"de767fd0175468a89550c1ad9ebea46d931b1e63be389d255f7596af0850c145"} Apr 16 18:04:58.169423 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:58.169376 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-d4fmq" podStartSLOduration=17.708428849 podStartE2EDuration="19.169362092s" podCreationTimestamp="2026-04-16 18:04:39 +0000 UTC" firstStartedPulling="2026-04-16 18:04:56.033317585 +0000 UTC m=+153.862280696" lastFinishedPulling="2026-04-16 18:04:57.49425062 +0000 UTC m=+155.323213939" observedRunningTime="2026-04-16 18:04:58.168997877 +0000 UTC m=+155.997961006" watchObservedRunningTime="2026-04-16 18:04:58.169362092 +0000 UTC m=+155.998325225" Apr 16 18:04:58.174163 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:58.174141 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-dns/dns-default-7d8x4" podUID="cc0e7134-4a00-4ae4-9fdd-e97a96de72f4" Apr 16 18:04:58.197649 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:58.197610 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-pzcv4" podUID="5e6f71fd-65bf-41c3-b270-e42236bbe730" Apr 16 18:04:58.721786 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:04:58.721747 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-multus/network-metrics-daemon-tv8pg" podUID="28103df6-de37-4b7f-b3e8-6ef03a0d1cfe" Apr 16 18:04:59.155278 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:04:59.155253 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-7d8x4" Apr 16 18:05:03.116702 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:03.116580 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:05:03.116702 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:03.116692 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert\") pod \"ingress-canary-pzcv4\" (UID: \"5e6f71fd-65bf-41c3-b270-e42236bbe730\") " pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:05:03.119008 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:03.118977 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc0e7134-4a00-4ae4-9fdd-e97a96de72f4-metrics-tls\") pod \"dns-default-7d8x4\" (UID: \"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4\") " pod="openshift-dns/dns-default-7d8x4" Apr 16 18:05:03.119113 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:03.119072 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e6f71fd-65bf-41c3-b270-e42236bbe730-cert\") pod \"ingress-canary-pzcv4\" (UID: \"5e6f71fd-65bf-41c3-b270-e42236bbe730\") " pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:05:03.358821 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:03.358791 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-j5p9z\"" Apr 16 18:05:03.367149 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:03.367100 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-7d8x4" Apr 16 18:05:03.479142 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:03.479111 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7d8x4"] Apr 16 18:05:03.482149 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:05:03.482126 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc0e7134_4a00_4ae4_9fdd_e97a96de72f4.slice/crio-446d2289df77c1b80c0908ee129589dd82d26d12dd54dd9db5c15edd10ecaaf7 WatchSource:0}: Error finding container 446d2289df77c1b80c0908ee129589dd82d26d12dd54dd9db5c15edd10ecaaf7: Status 404 returned error can't find the container with id 446d2289df77c1b80c0908ee129589dd82d26d12dd54dd9db5c15edd10ecaaf7 Apr 16 18:05:04.165035 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:04.164995 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7d8x4" event={"ID":"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4","Type":"ContainerStarted","Data":"446d2289df77c1b80c0908ee129589dd82d26d12dd54dd9db5c15edd10ecaaf7"} Apr 16 18:05:05.169986 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:05.169951 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7d8x4" event={"ID":"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4","Type":"ContainerStarted","Data":"997ff7f1566ebd721521ee8e5f297da43bd939424a1974e88ae8bda7e75194f6"} Apr 16 18:05:05.169986 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:05.169991 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7d8x4" event={"ID":"cc0e7134-4a00-4ae4-9fdd-e97a96de72f4","Type":"ContainerStarted","Data":"9675a4c0aadc34f21a25ceaa0a3243fee3bf6bc25137482fa33346c3692679db"} Apr 16 18:05:05.170418 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:05.170092 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-7d8x4" Apr 16 18:05:05.188331 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:05.188285 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-7d8x4" podStartSLOduration=129.077541851 podStartE2EDuration="2m10.188270538s" podCreationTimestamp="2026-04-16 18:02:55 +0000 UTC" firstStartedPulling="2026-04-16 18:05:03.483905152 +0000 UTC m=+161.312868262" lastFinishedPulling="2026-04-16 18:05:04.594633833 +0000 UTC m=+162.423596949" observedRunningTime="2026-04-16 18:05:05.187709483 +0000 UTC m=+163.016672607" watchObservedRunningTime="2026-04-16 18:05:05.188270538 +0000 UTC m=+163.017233668" Apr 16 18:05:08.939585 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:08.939513 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" podUID="05049297-92d0-44f1-a940-b679011a1fcd" containerName="acm-agent" probeResult="failure" output="Get \"http://10.134.0.8:8000/readyz\": dial tcp 10.134.0.8:8000: connect: connection refused" Apr 16 18:05:09.181145 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:09.181105 2571 generic.go:358] "Generic (PLEG): container finished" podID="05049297-92d0-44f1-a940-b679011a1fcd" containerID="668a1a42972dced5b79320a10b333a70dd6cfa05b83fe160b7c3e216a99a69bf" exitCode=1 Apr 16 18:05:09.181292 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:09.181181 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" 
event={"ID":"05049297-92d0-44f1-a940-b679011a1fcd","Type":"ContainerDied","Data":"668a1a42972dced5b79320a10b333a70dd6cfa05b83fe160b7c3e216a99a69bf"} Apr 16 18:05:09.181604 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:09.181564 2571 scope.go:117] "RemoveContainer" containerID="668a1a42972dced5b79320a10b333a70dd6cfa05b83fe160b7c3e216a99a69bf" Apr 16 18:05:09.182435 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:09.182413 2571 generic.go:358] "Generic (PLEG): container finished" podID="4d627cc4-1505-4932-93b6-509c446670ce" containerID="29a5ef42ba8ff4e9a91ffe5601c6777754fd5efc963adcf32a461a2f7ae9c357" exitCode=255 Apr 16 18:05:09.182499 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:09.182456 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-9d9c8b845-7hq66" event={"ID":"4d627cc4-1505-4932-93b6-509c446670ce","Type":"ContainerDied","Data":"29a5ef42ba8ff4e9a91ffe5601c6777754fd5efc963adcf32a461a2f7ae9c357"} Apr 16 18:05:09.190360 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:09.190309 2571 scope.go:117] "RemoveContainer" containerID="29a5ef42ba8ff4e9a91ffe5601c6777754fd5efc963adcf32a461a2f7ae9c357" Apr 16 18:05:09.710090 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:09.710042 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg" Apr 16 18:05:10.186717 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.186685 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" event={"ID":"05049297-92d0-44f1-a940-b679011a1fcd","Type":"ContainerStarted","Data":"b7c0524aaf916ca1ba57c9a92608deb1a2215b3d2a2973cab95f8b389958440d"} Apr 16 18:05:10.187187 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.186945 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" Apr 16 18:05:10.187598 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.187557 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-5b4c859c74-p5rwc" Apr 16 18:05:10.188303 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.188283 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-9d9c8b845-7hq66" event={"ID":"4d627cc4-1505-4932-93b6-509c446670ce","Type":"ContainerStarted","Data":"1f0712540393d7052c36d0efe55ed7be9d3215f60588e5e1b843fd6a4cdb8456"} Apr 16 18:05:10.260257 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.260225 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-nglgp"] Apr 16 18:05:10.262361 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.262335 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.264577 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.264557 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 16 18:05:10.264696 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.264559 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 16 18:05:10.264696 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.264560 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-92dpg\"" Apr 16 18:05:10.264696 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.264561 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 16 18:05:10.264696 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.264561 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 16 18:05:10.272650 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.272629 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-nglgp"] Apr 16 18:05:10.371698 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.371664 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/12c773c6-f6a4-4bdf-9a43-e89bf6b599a0-crio-socket\") pod \"insights-runtime-extractor-nglgp\" (UID: \"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0\") " pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.371870 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.371710 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/12c773c6-f6a4-4bdf-9a43-e89bf6b599a0-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-nglgp\" (UID: \"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0\") " pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.371870 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.371735 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/12c773c6-f6a4-4bdf-9a43-e89bf6b599a0-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-nglgp\" (UID: \"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0\") " pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.371870 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.371828 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzzln\" (UniqueName: \"kubernetes.io/projected/12c773c6-f6a4-4bdf-9a43-e89bf6b599a0-kube-api-access-mzzln\") pod \"insights-runtime-extractor-nglgp\" (UID: \"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0\") " pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.372027 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.371885 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/12c773c6-f6a4-4bdf-9a43-e89bf6b599a0-data-volume\") pod \"insights-runtime-extractor-nglgp\" (UID: \"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0\") " 
pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.414627 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.414581 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-7c9486fc9-58hcf"] Apr 16 18:05:10.416724 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.416705 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.422261 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.422225 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\"" Apr 16 18:05:10.423182 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.422811 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Apr 16 18:05:10.425403 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.425383 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Apr 16 18:05:10.425855 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.425838 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-jwfwh\"" Apr 16 18:05:10.432017 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.431993 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Apr 16 18:05:10.436981 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.436921 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7c9486fc9-58hcf"] Apr 16 18:05:10.472846 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.472813 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/16ee3007-8734-4922-9eda-fdb2be620e47-image-registry-private-configuration\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.472846 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.472848 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16ee3007-8734-4922-9eda-fdb2be620e47-registry-certificates\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.473011 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.472888 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/12c773c6-f6a4-4bdf-9a43-e89bf6b599a0-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-nglgp\" (UID: \"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0\") " pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.473011 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.472912 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mzzln\" (UniqueName: \"kubernetes.io/projected/12c773c6-f6a4-4bdf-9a43-e89bf6b599a0-kube-api-access-mzzln\") pod \"insights-runtime-extractor-nglgp\" (UID: \"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0\") " 
pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.473011 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.472930 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16ee3007-8734-4922-9eda-fdb2be620e47-registry-tls\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.473011 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.472956 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/12c773c6-f6a4-4bdf-9a43-e89bf6b599a0-data-volume\") pod \"insights-runtime-extractor-nglgp\" (UID: \"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0\") " pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.473011 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.472980 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16ee3007-8734-4922-9eda-fdb2be620e47-bound-sa-token\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.473238 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.473123 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9phtl\" (UniqueName: \"kubernetes.io/projected/16ee3007-8734-4922-9eda-fdb2be620e47-kube-api-access-9phtl\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.473238 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.473166 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/12c773c6-f6a4-4bdf-9a43-e89bf6b599a0-crio-socket\") pod \"insights-runtime-extractor-nglgp\" (UID: \"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0\") " pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.473238 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.473205 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/12c773c6-f6a4-4bdf-9a43-e89bf6b599a0-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-nglgp\" (UID: \"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0\") " pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.473238 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.473227 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16ee3007-8734-4922-9eda-fdb2be620e47-ca-trust-extracted\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.473370 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.473255 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16ee3007-8734-4922-9eda-fdb2be620e47-trusted-ca\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " 
pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.473370 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.473262 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/12c773c6-f6a4-4bdf-9a43-e89bf6b599a0-crio-socket\") pod \"insights-runtime-extractor-nglgp\" (UID: \"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0\") " pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.473370 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.473278 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16ee3007-8734-4922-9eda-fdb2be620e47-installation-pull-secrets\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.473834 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.473816 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/12c773c6-f6a4-4bdf-9a43-e89bf6b599a0-data-volume\") pod \"insights-runtime-extractor-nglgp\" (UID: \"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0\") " pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.474034 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.474017 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/12c773c6-f6a4-4bdf-9a43-e89bf6b599a0-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-nglgp\" (UID: \"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0\") " pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.475749 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.475718 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/12c773c6-f6a4-4bdf-9a43-e89bf6b599a0-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-nglgp\" (UID: \"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0\") " pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.486628 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.486605 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzzln\" (UniqueName: \"kubernetes.io/projected/12c773c6-f6a4-4bdf-9a43-e89bf6b599a0-kube-api-access-mzzln\") pod \"insights-runtime-extractor-nglgp\" (UID: \"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0\") " pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.571036 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.571000 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-nglgp" Apr 16 18:05:10.573920 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.573890 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/16ee3007-8734-4922-9eda-fdb2be620e47-image-registry-private-configuration\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.574040 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.573931 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16ee3007-8734-4922-9eda-fdb2be620e47-registry-certificates\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.574122 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.574100 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16ee3007-8734-4922-9eda-fdb2be620e47-registry-tls\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.574181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.574149 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16ee3007-8734-4922-9eda-fdb2be620e47-bound-sa-token\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.574335 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.574309 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9phtl\" (UniqueName: \"kubernetes.io/projected/16ee3007-8734-4922-9eda-fdb2be620e47-kube-api-access-9phtl\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.574450 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.574416 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16ee3007-8734-4922-9eda-fdb2be620e47-ca-trust-extracted\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.574558 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.574474 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16ee3007-8734-4922-9eda-fdb2be620e47-trusted-ca\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.574558 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.574501 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16ee3007-8734-4922-9eda-fdb2be620e47-installation-pull-secrets\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " 
pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.574825 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.574806 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16ee3007-8734-4922-9eda-fdb2be620e47-registry-certificates\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.574896 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.574806 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16ee3007-8734-4922-9eda-fdb2be620e47-ca-trust-extracted\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.575528 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.575504 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16ee3007-8734-4922-9eda-fdb2be620e47-trusted-ca\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.576853 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.576832 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/16ee3007-8734-4922-9eda-fdb2be620e47-image-registry-private-configuration\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.576979 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.576961 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16ee3007-8734-4922-9eda-fdb2be620e47-registry-tls\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.577038 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.577016 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16ee3007-8734-4922-9eda-fdb2be620e47-installation-pull-secrets\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.591819 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.591791 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16ee3007-8734-4922-9eda-fdb2be620e47-bound-sa-token\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.591966 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.591950 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9phtl\" (UniqueName: \"kubernetes.io/projected/16ee3007-8734-4922-9eda-fdb2be620e47-kube-api-access-9phtl\") pod \"image-registry-7c9486fc9-58hcf\" (UID: \"16ee3007-8734-4922-9eda-fdb2be620e47\") " pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.713404 ip-10-0-128-59 kubenswrapper[2571]: I0416 
18:05:10.713381 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-nglgp"] Apr 16 18:05:10.714936 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:05:10.714914 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12c773c6_f6a4_4bdf_9a43_e89bf6b599a0.slice/crio-309c68c82792add21c929e431ea384b5aa6e03c79ccba86b0ba026a5d1173ce7 WatchSource:0}: Error finding container 309c68c82792add21c929e431ea384b5aa6e03c79ccba86b0ba026a5d1173ce7: Status 404 returned error can't find the container with id 309c68c82792add21c929e431ea384b5aa6e03c79ccba86b0ba026a5d1173ce7 Apr 16 18:05:10.725343 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.725319 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:10.857260 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:10.857230 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7c9486fc9-58hcf"] Apr 16 18:05:10.859963 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:05:10.859938 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16ee3007_8734_4922_9eda_fdb2be620e47.slice/crio-3d7ee86f072287a9c0f3e7e17cb329170e0542a377913080654995af5e64a486 WatchSource:0}: Error finding container 3d7ee86f072287a9c0f3e7e17cb329170e0542a377913080654995af5e64a486: Status 404 returned error can't find the container with id 3d7ee86f072287a9c0f3e7e17cb329170e0542a377913080654995af5e64a486 Apr 16 18:05:11.195112 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:11.195079 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" event={"ID":"16ee3007-8734-4922-9eda-fdb2be620e47","Type":"ContainerStarted","Data":"3bf1d7c4c19917b033bd319d9b9b7b205c8edab656b62c1c20d34757beb6cfe6"} Apr 16 18:05:11.195112 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:11.195117 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" event={"ID":"16ee3007-8734-4922-9eda-fdb2be620e47","Type":"ContainerStarted","Data":"3d7ee86f072287a9c0f3e7e17cb329170e0542a377913080654995af5e64a486"} Apr 16 18:05:11.195606 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:11.195179 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" Apr 16 18:05:11.196494 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:11.196471 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-nglgp" event={"ID":"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0","Type":"ContainerStarted","Data":"9b3152d9b4de9d81869052a9426885f8db4d21fcd3f4f632593efbb9e7a6ce94"} Apr 16 18:05:11.196494 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:11.196495 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-nglgp" event={"ID":"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0","Type":"ContainerStarted","Data":"309c68c82792add21c929e431ea384b5aa6e03c79ccba86b0ba026a5d1173ce7"} Apr 16 18:05:11.215394 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:11.215338 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-7c9486fc9-58hcf" podStartSLOduration=1.215318474 
podStartE2EDuration="1.215318474s" podCreationTimestamp="2026-04-16 18:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:05:11.213792273 +0000 UTC m=+169.042755406" watchObservedRunningTime="2026-04-16 18:05:11.215318474 +0000 UTC m=+169.044281609" Apr 16 18:05:11.710565 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:11.710483 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:05:11.713311 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:11.713293 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-dxq2x\"" Apr 16 18:05:11.721470 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:11.721448 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pzcv4" Apr 16 18:05:11.783843 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:11.783810 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:05:11.784570 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:11.784546 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31f4098e-db7f-4b17-a106-9e6043d3cfe0-service-ca-bundle\") pod \"router-default-5ccd7954bb-s86dh\" (UID: \"31f4098e-db7f-4b17-a106-9e6043d3cfe0\") " pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:05:11.841003 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:11.840969 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pzcv4"] Apr 16 18:05:11.844167 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:05:11.844137 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e6f71fd_65bf_41c3_b270_e42236bbe730.slice/crio-7a26dd552e9f3a8bc424c19b719c2de5e2a6cf50d5bb65880ac5f8b69996ae2c WatchSource:0}: Error finding container 7a26dd552e9f3a8bc424c19b719c2de5e2a6cf50d5bb65880ac5f8b69996ae2c: Status 404 returned error can't find the container with id 7a26dd552e9f3a8bc424c19b719c2de5e2a6cf50d5bb65880ac5f8b69996ae2c Apr 16 18:05:12.063155 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:12.063073 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:05:12.193800 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:12.193766 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress/router-default-5ccd7954bb-s86dh"] Apr 16 18:05:12.197200 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:05:12.197171 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31f4098e_db7f_4b17_a106_9e6043d3cfe0.slice/crio-2c459bb61eb4a9f68fb55e98784f50f03e35a11a46d7d0bd299aa43c4574e46d WatchSource:0}: Error finding container 2c459bb61eb4a9f68fb55e98784f50f03e35a11a46d7d0bd299aa43c4574e46d: Status 404 returned error can't find the container with id 2c459bb61eb4a9f68fb55e98784f50f03e35a11a46d7d0bd299aa43c4574e46d Apr 16 18:05:12.200836 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:12.200806 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-nglgp" event={"ID":"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0","Type":"ContainerStarted","Data":"75e0ad0ec62d170d277506059a031cd1ca5a902dcaa7cf337445df7b70c9fa57"} Apr 16 18:05:12.201861 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:12.201836 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pzcv4" event={"ID":"5e6f71fd-65bf-41c3-b270-e42236bbe730","Type":"ContainerStarted","Data":"7a26dd552e9f3a8bc424c19b719c2de5e2a6cf50d5bb65880ac5f8b69996ae2c"} Apr 16 18:05:13.206975 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:13.206934 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-nglgp" event={"ID":"12c773c6-f6a4-4bdf-9a43-e89bf6b599a0","Type":"ContainerStarted","Data":"f3c98f405f147a4e313ea29d33528b0ad1dacb0451417ee775124aaf775f77a7"} Apr 16 18:05:13.208353 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:13.208322 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ccd7954bb-s86dh" event={"ID":"31f4098e-db7f-4b17-a106-9e6043d3cfe0","Type":"ContainerStarted","Data":"c5c0387c33034570497bf455dc1e190decfe8c295b9d270c34986b73a20eb92b"} Apr 16 18:05:13.208469 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:13.208357 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ccd7954bb-s86dh" event={"ID":"31f4098e-db7f-4b17-a106-9e6043d3cfe0","Type":"ContainerStarted","Data":"2c459bb61eb4a9f68fb55e98784f50f03e35a11a46d7d0bd299aa43c4574e46d"} Apr 16 18:05:13.230744 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:13.230700 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-nglgp" podStartSLOduration=1.195051248 podStartE2EDuration="3.230685524s" podCreationTimestamp="2026-04-16 18:05:10 +0000 UTC" firstStartedPulling="2026-04-16 18:05:10.785881492 +0000 UTC m=+168.614844608" lastFinishedPulling="2026-04-16 18:05:12.821515758 +0000 UTC m=+170.650478884" observedRunningTime="2026-04-16 18:05:13.229477751 +0000 UTC m=+171.058440887" watchObservedRunningTime="2026-04-16 18:05:13.230685524 +0000 UTC m=+171.059648658" Apr 16 18:05:13.246958 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:13.246911 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5ccd7954bb-s86dh" podStartSLOduration=34.246895978 podStartE2EDuration="34.246895978s" podCreationTimestamp="2026-04-16 18:04:39 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:05:13.246473281 +0000 UTC m=+171.075436425" watchObservedRunningTime="2026-04-16 18:05:13.246895978 +0000 UTC m=+171.075859112" Apr 16 18:05:14.064133 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:14.064088 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:05:14.066436 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:14.066413 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:05:14.212655 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:14.212619 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pzcv4" event={"ID":"5e6f71fd-65bf-41c3-b270-e42236bbe730","Type":"ContainerStarted","Data":"1d0e8cb387619b507c87afb0e6b967a325a54a964d6ba52ac6da1962551aa0bc"} Apr 16 18:05:14.213116 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:14.213074 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:05:14.214014 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:14.213991 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5ccd7954bb-s86dh" Apr 16 18:05:14.233540 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:14.233485 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-pzcv4" podStartSLOduration=137.571092299 podStartE2EDuration="2m19.233465889s" podCreationTimestamp="2026-04-16 18:02:55 +0000 UTC" firstStartedPulling="2026-04-16 18:05:11.846667257 +0000 UTC m=+169.675630371" lastFinishedPulling="2026-04-16 18:05:13.50904085 +0000 UTC m=+171.338003961" observedRunningTime="2026-04-16 18:05:14.233287992 +0000 UTC m=+172.062251129" watchObservedRunningTime="2026-04-16 18:05:14.233465889 +0000 UTC m=+172.062429017" Apr 16 18:05:15.174760 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:15.174728 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-7d8x4" Apr 16 18:05:17.799159 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.799126 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-78f957474d-9p4bb"] Apr 16 18:05:17.801743 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.801726 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb" Apr 16 18:05:17.805680 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.805645 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 16 18:05:17.805829 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.805699 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 16 18:05:17.805829 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.805729 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-dockercfg-s9lmk\"" Apr 16 18:05:17.805829 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.805701 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 16 18:05:17.805829 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.805704 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-tls\"" Apr 16 18:05:17.806061 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.806045 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-kube-rbac-proxy-config\"" Apr 16 18:05:17.826762 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.826733 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a0d988b-3278-4a48-b564-cd21c0da8eec-metrics-client-ca\") pod \"prometheus-operator-78f957474d-9p4bb\" (UID: \"8a0d988b-3278-4a48-b564-cd21c0da8eec\") " pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb" Apr 16 18:05:17.826881 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.826768 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8a0d988b-3278-4a48-b564-cd21c0da8eec-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-78f957474d-9p4bb\" (UID: \"8a0d988b-3278-4a48-b564-cd21c0da8eec\") " pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb" Apr 16 18:05:17.826881 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.826787 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmqcc\" (UniqueName: \"kubernetes.io/projected/8a0d988b-3278-4a48-b564-cd21c0da8eec-kube-api-access-bmqcc\") pod \"prometheus-operator-78f957474d-9p4bb\" (UID: \"8a0d988b-3278-4a48-b564-cd21c0da8eec\") " pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb" Apr 16 18:05:17.826881 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.826810 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a0d988b-3278-4a48-b564-cd21c0da8eec-prometheus-operator-tls\") pod \"prometheus-operator-78f957474d-9p4bb\" (UID: \"8a0d988b-3278-4a48-b564-cd21c0da8eec\") " pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb" Apr 16 18:05:17.829742 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.829720 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-78f957474d-9p4bb"] Apr 16 18:05:17.928075 ip-10-0-128-59 kubenswrapper[2571]: 
Apr 16 18:05:17.928075 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.928039 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bmqcc\" (UniqueName: \"kubernetes.io/projected/8a0d988b-3278-4a48-b564-cd21c0da8eec-kube-api-access-bmqcc\") pod \"prometheus-operator-78f957474d-9p4bb\" (UID: \"8a0d988b-3278-4a48-b564-cd21c0da8eec\") " pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb"
Apr 16 18:05:17.928075 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.928083 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a0d988b-3278-4a48-b564-cd21c0da8eec-prometheus-operator-tls\") pod \"prometheus-operator-78f957474d-9p4bb\" (UID: \"8a0d988b-3278-4a48-b564-cd21c0da8eec\") " pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb"
Apr 16 18:05:17.928282 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.928139 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a0d988b-3278-4a48-b564-cd21c0da8eec-metrics-client-ca\") pod \"prometheus-operator-78f957474d-9p4bb\" (UID: \"8a0d988b-3278-4a48-b564-cd21c0da8eec\") " pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb"
Apr 16 18:05:17.928282 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.928170 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8a0d988b-3278-4a48-b564-cd21c0da8eec-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-78f957474d-9p4bb\" (UID: \"8a0d988b-3278-4a48-b564-cd21c0da8eec\") " pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb"
Apr 16 18:05:17.928282 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:05:17.928256 2571 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Apr 16 18:05:17.928391 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:05:17.928335 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a0d988b-3278-4a48-b564-cd21c0da8eec-prometheus-operator-tls podName:8a0d988b-3278-4a48-b564-cd21c0da8eec nodeName:}" failed. No retries permitted until 2026-04-16 18:05:18.42831793 +0000 UTC m=+176.257281042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/8a0d988b-3278-4a48-b564-cd21c0da8eec-prometheus-operator-tls") pod "prometheus-operator-78f957474d-9p4bb" (UID: "8a0d988b-3278-4a48-b564-cd21c0da8eec") : secret "prometheus-operator-tls" not found
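Both mount failure modes in this log are reference problems rather than mount mechanics: either the secret does not exist yet (samples-operator-tls, router-metrics-certs-default, prometheus-operator-tls) or the configmap exists but lacks the referenced key (service-ca-bundle missing service-ca.crt). The records below show the 500ms retry succeeding once the secret appears. A hedged client-go sketch for checking both conditions from outside the kubelet; it assumes in-cluster config, and the namespaces and object names are copied from the log.

```go
// volume_refs.go — minimal client-go sketch for the two failure modes seen in
// this log: a secret that does not exist yet ("secret \"...\" not found") and
// a configmap that exists but lacks a referenced key ("configmap references
// non-existent config key: service-ca.crt"). Assumes in-cluster config.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Failure mode 1: the secret simply is not there yet.
	_, err = cs.CoreV1().Secrets("openshift-monitoring").
		Get(ctx, "prometheus-operator-tls", metav1.GetOptions{})
	fmt.Println("secret lookup:", err) // nil once the operator has created it

	// Failure mode 2: the configmap exists but the referenced key is absent.
	cm, err := cs.CoreV1().ConfigMaps("openshift-ingress").
		Get(ctx, "service-ca-bundle", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if _, ok := cm.Data["service-ca.crt"]; !ok {
		fmt.Println("configmap references non-existent config key: service-ca.crt")
	}
}
```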
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/8a0d988b-3278-4a48-b564-cd21c0da8eec-prometheus-operator-tls") pod "prometheus-operator-78f957474d-9p4bb" (UID: "8a0d988b-3278-4a48-b564-cd21c0da8eec") : secret "prometheus-operator-tls" not found Apr 16 18:05:17.928867 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.928838 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a0d988b-3278-4a48-b564-cd21c0da8eec-metrics-client-ca\") pod \"prometheus-operator-78f957474d-9p4bb\" (UID: \"8a0d988b-3278-4a48-b564-cd21c0da8eec\") " pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb" Apr 16 18:05:17.930403 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.930386 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8a0d988b-3278-4a48-b564-cd21c0da8eec-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-78f957474d-9p4bb\" (UID: \"8a0d988b-3278-4a48-b564-cd21c0da8eec\") " pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb" Apr 16 18:05:17.942053 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:17.942030 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmqcc\" (UniqueName: \"kubernetes.io/projected/8a0d988b-3278-4a48-b564-cd21c0da8eec-kube-api-access-bmqcc\") pod \"prometheus-operator-78f957474d-9p4bb\" (UID: \"8a0d988b-3278-4a48-b564-cd21c0da8eec\") " pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb" Apr 16 18:05:18.431294 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:18.431241 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a0d988b-3278-4a48-b564-cd21c0da8eec-prometheus-operator-tls\") pod \"prometheus-operator-78f957474d-9p4bb\" (UID: \"8a0d988b-3278-4a48-b564-cd21c0da8eec\") " pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb" Apr 16 18:05:18.433709 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:18.433686 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a0d988b-3278-4a48-b564-cd21c0da8eec-prometheus-operator-tls\") pod \"prometheus-operator-78f957474d-9p4bb\" (UID: \"8a0d988b-3278-4a48-b564-cd21c0da8eec\") " pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb" Apr 16 18:05:18.710696 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:18.710605 2571 util.go:30] "No sandbox for pod can be found. 
Apr 16 18:05:18.710696 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:18.710605 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb"
Apr 16 18:05:18.851750 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:18.851719 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-78f957474d-9p4bb"]
Apr 16 18:05:18.854759 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:05:18.854732 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a0d988b_3278_4a48_b564_cd21c0da8eec.slice/crio-d671a09dc0bee71114035f08d38f8af5f6357224392db9e51717161e78ecc479 WatchSource:0}: Error finding container d671a09dc0bee71114035f08d38f8af5f6357224392db9e51717161e78ecc479: Status 404 returned error can't find the container with id d671a09dc0bee71114035f08d38f8af5f6357224392db9e51717161e78ecc479
Apr 16 18:05:19.226281 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:19.226247 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb" event={"ID":"8a0d988b-3278-4a48-b564-cd21c0da8eec","Type":"ContainerStarted","Data":"d671a09dc0bee71114035f08d38f8af5f6357224392db9e51717161e78ecc479"}
Apr 16 18:05:20.230235 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:20.230197 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb" event={"ID":"8a0d988b-3278-4a48-b564-cd21c0da8eec","Type":"ContainerStarted","Data":"7d26f09d9587ad6a69eb5eb7d3f264dcd4bc97030826f1fe038c5af1a97a11b0"}
Apr 16 18:05:20.230235 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:20.230239 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb" event={"ID":"8a0d988b-3278-4a48-b564-cd21c0da8eec","Type":"ContainerStarted","Data":"f1a5d131be11172bcaf406c0940e9c117f70b053eec53d96d82a0b4f2edf449a"}
Apr 16 18:05:20.248781 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:20.248720 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-78f957474d-9p4bb" podStartSLOduration=2.052941215 podStartE2EDuration="3.248705168s" podCreationTimestamp="2026-04-16 18:05:17 +0000 UTC" firstStartedPulling="2026-04-16 18:05:18.856642939 +0000 UTC m=+176.685606050" lastFinishedPulling="2026-04-16 18:05:20.052406876 +0000 UTC m=+177.881370003" observedRunningTime="2026-04-16 18:05:20.247989354 +0000 UTC m=+178.076952488" watchObservedRunningTime="2026-04-16 18:05:20.248705168 +0000 UTC m=+178.077668310"
Apr 16 18:05:22.279445 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.279413 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-t25v4"]
Apr 16 18:05:22.281545 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.281522 2571 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.285565 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.285545 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 16 18:05:22.285921 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.285906 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-7x7gd\"" Apr 16 18:05:22.289907 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.289883 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 16 18:05:22.291361 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.291340 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 16 18:05:22.362065 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.362032 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7d53f165-c7b2-4238-a1ac-17e102839778-node-exporter-textfile\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.362065 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.362072 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d53f165-c7b2-4238-a1ac-17e102839778-sys\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.362275 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.362091 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d53f165-c7b2-4238-a1ac-17e102839778-root\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.362275 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.362109 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/7d53f165-c7b2-4238-a1ac-17e102839778-node-exporter-accelerators-collector-config\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.362275 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.362197 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d53f165-c7b2-4238-a1ac-17e102839778-node-exporter-wtmp\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.362275 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.362240 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d53f165-c7b2-4238-a1ac-17e102839778-metrics-client-ca\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.362398 
ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.362275 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcxjh\" (UniqueName: \"kubernetes.io/projected/7d53f165-c7b2-4238-a1ac-17e102839778-kube-api-access-kcxjh\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.362398 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.362313 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d53f165-c7b2-4238-a1ac-17e102839778-node-exporter-tls\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.362398 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.362361 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d53f165-c7b2-4238-a1ac-17e102839778-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.463299 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.463252 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d53f165-c7b2-4238-a1ac-17e102839778-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.463477 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.463320 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7d53f165-c7b2-4238-a1ac-17e102839778-node-exporter-textfile\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.463477 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.463355 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d53f165-c7b2-4238-a1ac-17e102839778-sys\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.463477 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.463379 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d53f165-c7b2-4238-a1ac-17e102839778-root\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.463477 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.463403 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/7d53f165-c7b2-4238-a1ac-17e102839778-node-exporter-accelerators-collector-config\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.463477 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.463433 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" 
(UniqueName: \"kubernetes.io/host-path/7d53f165-c7b2-4238-a1ac-17e102839778-node-exporter-wtmp\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.463743 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.463463 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d53f165-c7b2-4238-a1ac-17e102839778-sys\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.463743 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.463468 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d53f165-c7b2-4238-a1ac-17e102839778-root\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.463743 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.463502 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d53f165-c7b2-4238-a1ac-17e102839778-metrics-client-ca\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.463743 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.463535 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kcxjh\" (UniqueName: \"kubernetes.io/projected/7d53f165-c7b2-4238-a1ac-17e102839778-kube-api-access-kcxjh\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.463743 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.463561 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d53f165-c7b2-4238-a1ac-17e102839778-node-exporter-tls\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.463981 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.463770 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7d53f165-c7b2-4238-a1ac-17e102839778-node-exporter-textfile\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.463981 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.463798 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d53f165-c7b2-4238-a1ac-17e102839778-node-exporter-wtmp\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.464084 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.464056 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d53f165-c7b2-4238-a1ac-17e102839778-metrics-client-ca\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.464168 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.464144 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/7d53f165-c7b2-4238-a1ac-17e102839778-node-exporter-accelerators-collector-config\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.465805 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.465787 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d53f165-c7b2-4238-a1ac-17e102839778-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.465805 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.465792 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d53f165-c7b2-4238-a1ac-17e102839778-node-exporter-tls\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.486570 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.486528 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcxjh\" (UniqueName: \"kubernetes.io/projected/7d53f165-c7b2-4238-a1ac-17e102839778-kube-api-access-kcxjh\") pod \"node-exporter-t25v4\" (UID: \"7d53f165-c7b2-4238-a1ac-17e102839778\") " pod="openshift-monitoring/node-exporter-t25v4" Apr 16 18:05:22.593341 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.593315 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-7x7gd\"" Apr 16 18:05:22.600469 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.600446 2571 util.go:30] "No sandbox for pod can be found. 
Apr 16 18:05:22.600469 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:22.600446 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-t25v4"
Apr 16 18:05:22.607998 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:05:22.607971 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d53f165_c7b2_4238_a1ac_17e102839778.slice/crio-0e1a74444260a6f73b3148e0262ec0be0d7d09dba248305ca15274df42681f2d WatchSource:0}: Error finding container 0e1a74444260a6f73b3148e0262ec0be0d7d09dba248305ca15274df42681f2d: Status 404 returned error can't find the container with id 0e1a74444260a6f73b3148e0262ec0be0d7d09dba248305ca15274df42681f2d
Apr 16 18:05:23.238945 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:23.238923 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-t25v4" event={"ID":"7d53f165-c7b2-4238-a1ac-17e102839778","Type":"ContainerStarted","Data":"0e1a74444260a6f73b3148e0262ec0be0d7d09dba248305ca15274df42681f2d"}
Apr 16 18:05:24.243310 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:24.243276 2571 generic.go:358] "Generic (PLEG): container finished" podID="7d53f165-c7b2-4238-a1ac-17e102839778" containerID="ba05d84b731af97b798f68fe3fd639e452d7f845400567f1d13df35f1a33e2c5" exitCode=0
Apr 16 18:05:24.243683 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:24.243314 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-t25v4" event={"ID":"7d53f165-c7b2-4238-a1ac-17e102839778","Type":"ContainerDied","Data":"ba05d84b731af97b798f68fe3fd639e452d7f845400567f1d13df35f1a33e2c5"}
Apr 16 18:05:25.247939 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.247900 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-t25v4" event={"ID":"7d53f165-c7b2-4238-a1ac-17e102839778","Type":"ContainerStarted","Data":"8c1122169362b599174d2dd55471f85721d359f45ae407c4861f6aba2a2e30c3"}
Apr 16 18:05:25.247939 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.247940 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-t25v4" event={"ID":"7d53f165-c7b2-4238-a1ac-17e102839778","Type":"ContainerStarted","Data":"6e8fcd05e91ada71d75e7c4dbf0acf351e3afa58fa000e7b6cbfc9695acea78d"}
Apr 16 18:05:25.260268 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.260240 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp"]
Apr 16 18:05:25.263841 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.263820 2571 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.267257 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.267231 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-rules\"" Apr 16 18:05:25.267398 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.267294 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-metrics\"" Apr 16 18:05:25.267398 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.267232 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-web\"" Apr 16 18:05:25.267568 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.267553 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-dockercfg-cwkmz\"" Apr 16 18:05:25.267732 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.267715 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy\"" Apr 16 18:05:25.267974 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.267958 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-tls\"" Apr 16 18:05:25.268048 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.267968 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-grpc-tls-83hpjafglgpe1\"" Apr 16 18:05:25.282471 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.282447 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp"] Apr 16 18:05:25.285433 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.285386 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-t25v4" podStartSLOduration=2.663663327 podStartE2EDuration="3.285363644s" podCreationTimestamp="2026-04-16 18:05:22 +0000 UTC" firstStartedPulling="2026-04-16 18:05:22.609584631 +0000 UTC m=+180.438547746" lastFinishedPulling="2026-04-16 18:05:23.231284952 +0000 UTC m=+181.060248063" observedRunningTime="2026-04-16 18:05:25.285108061 +0000 UTC m=+183.114071194" watchObservedRunningTime="2026-04-16 18:05:25.285363644 +0000 UTC m=+183.114326791" Apr 16 18:05:25.392233 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.392198 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-thanos-querier-tls\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.392397 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.392258 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.392397 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.392285 2571 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-grpc-tls\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.392397 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.392326 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.392397 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.392381 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.392542 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.392424 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.392542 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.392446 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/025ecd6d-8672-40e5-bcff-7d12a5c16dca-metrics-client-ca\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.392542 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.392464 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-549p8\" (UniqueName: \"kubernetes.io/projected/025ecd6d-8672-40e5-bcff-7d12a5c16dca-kube-api-access-549p8\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.493575 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.493538 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.493575 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.493575 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-grpc-tls\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: 
\"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.493780 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.493708 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.493780 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.493738 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.493780 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.493764 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.493889 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.493794 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/025ecd6d-8672-40e5-bcff-7d12a5c16dca-metrics-client-ca\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.493889 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.493866 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-549p8\" (UniqueName: \"kubernetes.io/projected/025ecd6d-8672-40e5-bcff-7d12a5c16dca-kube-api-access-549p8\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.493974 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.493907 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-thanos-querier-tls\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.494817 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.494754 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/025ecd6d-8672-40e5-bcff-7d12a5c16dca-metrics-client-ca\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.496287 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.496229 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: 
\"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.496480 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.496456 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-grpc-tls\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.496568 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.496544 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.496662 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.496630 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.496746 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.496729 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.496838 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.496820 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/025ecd6d-8672-40e5-bcff-7d12a5c16dca-secret-thanos-querier-tls\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.502094 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.502042 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-549p8\" (UniqueName: \"kubernetes.io/projected/025ecd6d-8672-40e5-bcff-7d12a5c16dca-kube-api-access-549p8\") pod \"thanos-querier-6f9ccd9bf9-sxcdp\" (UID: \"025ecd6d-8672-40e5-bcff-7d12a5c16dca\") " pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.572917 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.572881 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:25.704356 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:25.703316 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp"] Apr 16 18:05:25.706685 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:05:25.706655 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod025ecd6d_8672_40e5_bcff_7d12a5c16dca.slice/crio-4a7983385728c29ea4414e78b07318e511a750c37c14404178b02b807c197369 WatchSource:0}: Error finding container 4a7983385728c29ea4414e78b07318e511a750c37c14404178b02b807c197369: Status 404 returned error can't find the container with id 4a7983385728c29ea4414e78b07318e511a750c37c14404178b02b807c197369 Apr 16 18:05:26.251601 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:26.251548 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" event={"ID":"025ecd6d-8672-40e5-bcff-7d12a5c16dca","Type":"ContainerStarted","Data":"4a7983385728c29ea4414e78b07318e511a750c37c14404178b02b807c197369"} Apr 16 18:05:28.261436 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:28.261399 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" event={"ID":"025ecd6d-8672-40e5-bcff-7d12a5c16dca","Type":"ContainerStarted","Data":"e89e067df3dd0639af6cc162dd66a4954b1f3c14f69ede9886870bdcabdc3f56"} Apr 16 18:05:28.261436 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:28.261441 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" event={"ID":"025ecd6d-8672-40e5-bcff-7d12a5c16dca","Type":"ContainerStarted","Data":"baa6c453ddf100096196e115f5bd96b734f4831a170ddbf6ea79df0d771b94d3"} Apr 16 18:05:28.261974 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:28.261454 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" event={"ID":"025ecd6d-8672-40e5-bcff-7d12a5c16dca","Type":"ContainerStarted","Data":"14f9fc56c2e8ac91a80b0340dac588363b4bd23b591b178a00e17cea15b8ee54"} Apr 16 18:05:29.265927 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:29.265891 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" event={"ID":"025ecd6d-8672-40e5-bcff-7d12a5c16dca","Type":"ContainerStarted","Data":"9eceb5ef42de80b043c6adcef68fa89366f814d5e4c1ed403444d6d6b9698fee"} Apr 16 18:05:29.265927 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:29.265931 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" event={"ID":"025ecd6d-8672-40e5-bcff-7d12a5c16dca","Type":"ContainerStarted","Data":"d10d4220f3db9b96ee24185ed0a77cdbb5fe6c9ccc1207721c5d68a0b91804f2"} Apr 16 18:05:29.266348 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:29.265940 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" event={"ID":"025ecd6d-8672-40e5-bcff-7d12a5c16dca","Type":"ContainerStarted","Data":"037863539e2f145d0fbc4087a11aedad80c1ff67a28e283b7079255bb5c0b27c"} Apr 16 18:05:29.266348 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:29.266059 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" Apr 16 18:05:29.287420 ip-10-0-128-59 kubenswrapper[2571]: I0416 
18:05:29.287374 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp" podStartSLOduration=1.556426934 podStartE2EDuration="4.287359202s" podCreationTimestamp="2026-04-16 18:05:25 +0000 UTC" firstStartedPulling="2026-04-16 18:05:25.708713278 +0000 UTC m=+183.537676388" lastFinishedPulling="2026-04-16 18:05:28.439645541 +0000 UTC m=+186.268608656" observedRunningTime="2026-04-16 18:05:29.286476201 +0000 UTC m=+187.115439348" watchObservedRunningTime="2026-04-16 18:05:29.287359202 +0000 UTC m=+187.116322363"
Apr 16 18:05:32.207281 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:32.207248 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-7c9486fc9-58hcf"
Apr 16 18:05:35.274924 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:05:35.274897 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-6f9ccd9bf9-sxcdp"
Apr 16 18:06:33.431147 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:06:33.431068 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs\") pod \"network-metrics-daemon-tv8pg\" (UID: \"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe\") " pod="openshift-multus/network-metrics-daemon-tv8pg"
Apr 16 18:06:33.433322 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:06:33.433301 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28103df6-de37-4b7f-b3e8-6ef03a0d1cfe-metrics-certs\") pod \"network-metrics-daemon-tv8pg\" (UID: \"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe\") " pod="openshift-multus/network-metrics-daemon-tv8pg"
Apr 16 18:06:33.713402 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:06:33.713330 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-jt7dn\""
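The pod_startup_latency_tracker entries in this journal (most recently the thanos-querier one above) are internally consistent: podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), computed on the monotonic clock, the m=+... offsets. That matches the kubelet convention of excluding image pulls from the startup SLO figure. A small Go check against the thanos-querier numbers:

package main

import "fmt"

func main() {
	// Monotonic offsets (the "m=+..." values) from the thanos-querier entry.
	firstStartedPulling := 183.537676388
	lastFinishedPulling := 186.268608656
	podStartE2E := 4.287359202 // watchObservedRunningTime minus podCreationTimestamp

	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull window: %.9fs\n", pull)             // 2.730932268s
	fmt.Printf("e2e minus pull:    %.9fs\n", podStartE2E-pull) // 1.556426934s, the logged podStartSLOduration
}

The same arithmetic reproduces podStartSLOduration=2.052941215 for prometheus-operator earlier in the log and 252.455597401 for network-metrics-daemon below.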
Apr 16 18:06:33.721641 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:06:33.721616 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv8pg"
Apr 16 18:06:34.043498 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:06:34.043471 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tv8pg"]
Apr 16 18:06:34.045929 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:06:34.045887 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28103df6_de37_4b7f_b3e8_6ef03a0d1cfe.slice/crio-d716b4f8c80d25482f88e3a6f25df9ffe9c90d892a6ea62d3135b0401c0f82ba WatchSource:0}: Error finding container d716b4f8c80d25482f88e3a6f25df9ffe9c90d892a6ea62d3135b0401c0f82ba: Status 404 returned error can't find the container with id d716b4f8c80d25482f88e3a6f25df9ffe9c90d892a6ea62d3135b0401c0f82ba
Apr 16 18:06:34.433308 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:06:34.433273 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tv8pg" event={"ID":"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe","Type":"ContainerStarted","Data":"d716b4f8c80d25482f88e3a6f25df9ffe9c90d892a6ea62d3135b0401c0f82ba"}
Apr 16 18:06:35.437637 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:06:35.437581 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tv8pg" event={"ID":"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe","Type":"ContainerStarted","Data":"1ba8eb4d42802fa09d6876a604a33f50cd20d1473cc2ed98357b06cb4937def5"}
Apr 16 18:06:35.437637 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:06:35.437636 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tv8pg" event={"ID":"28103df6-de37-4b7f-b3e8-6ef03a0d1cfe","Type":"ContainerStarted","Data":"c657ec0733e452ccbf8e9ba7668105f71a3db4e2c8c09b723a97a2c3dd544d56"}
Apr 16 18:06:35.457734 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:06:35.457682 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-tv8pg" podStartSLOduration=252.455597401 podStartE2EDuration="4m13.457666468s" podCreationTimestamp="2026-04-16 18:02:22 +0000 UTC" firstStartedPulling="2026-04-16 18:06:34.047635859 +0000 UTC m=+251.876598969" lastFinishedPulling="2026-04-16 18:06:35.049704921 +0000 UTC m=+252.878668036" observedRunningTime="2026-04-16 18:06:35.456359408 +0000 UTC m=+253.285322541" watchObservedRunningTime="2026-04-16 18:06:35.457666468 +0000 UTC m=+253.286629637"
Apr 16 18:07:22.593375 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:07:22.593346 2571 kubelet.go:1628] "Image garbage collection succeeded"
Apr 16 18:11:39.452885 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:39.452846 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-h4qv7"]
Apr 16 18:11:39.454973 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:39.454956 2571 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="kube-system/global-pull-secret-syncer-h4qv7" Apr 16 18:11:39.462707 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:39.462689 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 16 18:11:39.473432 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:39.473408 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-h4qv7"] Apr 16 18:11:39.571704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:39.571665 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/263d1823-fcd5-4e8d-a12b-f7850e915d71-dbus\") pod \"global-pull-secret-syncer-h4qv7\" (UID: \"263d1823-fcd5-4e8d-a12b-f7850e915d71\") " pod="kube-system/global-pull-secret-syncer-h4qv7" Apr 16 18:11:39.571704 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:39.571705 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/263d1823-fcd5-4e8d-a12b-f7850e915d71-original-pull-secret\") pod \"global-pull-secret-syncer-h4qv7\" (UID: \"263d1823-fcd5-4e8d-a12b-f7850e915d71\") " pod="kube-system/global-pull-secret-syncer-h4qv7" Apr 16 18:11:39.571911 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:39.571730 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/263d1823-fcd5-4e8d-a12b-f7850e915d71-kubelet-config\") pod \"global-pull-secret-syncer-h4qv7\" (UID: \"263d1823-fcd5-4e8d-a12b-f7850e915d71\") " pod="kube-system/global-pull-secret-syncer-h4qv7" Apr 16 18:11:39.672630 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:39.672580 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/263d1823-fcd5-4e8d-a12b-f7850e915d71-dbus\") pod \"global-pull-secret-syncer-h4qv7\" (UID: \"263d1823-fcd5-4e8d-a12b-f7850e915d71\") " pod="kube-system/global-pull-secret-syncer-h4qv7" Apr 16 18:11:39.672722 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:39.672636 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/263d1823-fcd5-4e8d-a12b-f7850e915d71-original-pull-secret\") pod \"global-pull-secret-syncer-h4qv7\" (UID: \"263d1823-fcd5-4e8d-a12b-f7850e915d71\") " pod="kube-system/global-pull-secret-syncer-h4qv7" Apr 16 18:11:39.672722 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:39.672661 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/263d1823-fcd5-4e8d-a12b-f7850e915d71-kubelet-config\") pod \"global-pull-secret-syncer-h4qv7\" (UID: \"263d1823-fcd5-4e8d-a12b-f7850e915d71\") " pod="kube-system/global-pull-secret-syncer-h4qv7" Apr 16 18:11:39.672787 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:39.672737 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/263d1823-fcd5-4e8d-a12b-f7850e915d71-kubelet-config\") pod \"global-pull-secret-syncer-h4qv7\" (UID: \"263d1823-fcd5-4e8d-a12b-f7850e915d71\") " pod="kube-system/global-pull-secret-syncer-h4qv7" Apr 16 18:11:39.672823 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:39.672780 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"dbus\" (UniqueName: \"kubernetes.io/host-path/263d1823-fcd5-4e8d-a12b-f7850e915d71-dbus\") pod \"global-pull-secret-syncer-h4qv7\" (UID: \"263d1823-fcd5-4e8d-a12b-f7850e915d71\") " pod="kube-system/global-pull-secret-syncer-h4qv7" Apr 16 18:11:39.674899 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:39.674883 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/263d1823-fcd5-4e8d-a12b-f7850e915d71-original-pull-secret\") pod \"global-pull-secret-syncer-h4qv7\" (UID: \"263d1823-fcd5-4e8d-a12b-f7850e915d71\") " pod="kube-system/global-pull-secret-syncer-h4qv7" Apr 16 18:11:39.763332 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:39.763243 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-h4qv7" Apr 16 18:11:39.879322 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:39.879287 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-h4qv7"] Apr 16 18:11:39.882021 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:11:39.881994 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod263d1823_fcd5_4e8d_a12b_f7850e915d71.slice/crio-8ac8d0f6e736d6d7dffb1774cfbabd8c06bdcc81112a1c79ddd5c1b2bdc2b30a WatchSource:0}: Error finding container 8ac8d0f6e736d6d7dffb1774cfbabd8c06bdcc81112a1c79ddd5c1b2bdc2b30a: Status 404 returned error can't find the container with id 8ac8d0f6e736d6d7dffb1774cfbabd8c06bdcc81112a1c79ddd5c1b2bdc2b30a Apr 16 18:11:39.883486 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:39.883473 2571 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 18:11:40.232900 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:40.232867 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-h4qv7" event={"ID":"263d1823-fcd5-4e8d-a12b-f7850e915d71","Type":"ContainerStarted","Data":"8ac8d0f6e736d6d7dffb1774cfbabd8c06bdcc81112a1c79ddd5c1b2bdc2b30a"} Apr 16 18:11:44.244886 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:44.244847 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-h4qv7" event={"ID":"263d1823-fcd5-4e8d-a12b-f7850e915d71","Type":"ContainerStarted","Data":"03e98b7330a1cb9d004faeac1b00951730e77f6e8fc770565b8c40e3b8030335"} Apr 16 18:11:44.269066 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:11:44.269013 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-h4qv7" podStartSLOduration=1.89247981 podStartE2EDuration="5.268998006s" podCreationTimestamp="2026-04-16 18:11:39 +0000 UTC" firstStartedPulling="2026-04-16 18:11:39.883628205 +0000 UTC m=+557.712591315" lastFinishedPulling="2026-04-16 18:11:43.260146397 +0000 UTC m=+561.089109511" observedRunningTime="2026-04-16 18:11:44.267191495 +0000 UTC m=+562.096154627" watchObservedRunningTime="2026-04-16 18:11:44.268998006 +0000 UTC m=+562.097961138" Apr 16 18:12:32.404196 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:32.404112 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-admission-cf49989db-bd6rs"] Apr 16 18:12:32.406724 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:32.406696 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-bd6rs" Apr 16 18:12:32.409899 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:32.409878 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"openshift-service-ca.crt\"" Apr 16 18:12:32.410426 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:32.410405 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-admission-webhooks-certs\"" Apr 16 18:12:32.410520 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:32.410434 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"kube-root-ca.crt\"" Apr 16 18:12:32.410520 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:32.410491 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-rc7dn\"" Apr 16 18:12:32.410640 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:32.410548 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"keda-ocp-cabundle\"" Apr 16 18:12:32.419730 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:32.419708 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-bd6rs"] Apr 16 18:12:32.466155 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:32.466121 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/41925e2d-10cb-4dcb-9ce1-ea0a959ee19f-certificates\") pod \"keda-admission-cf49989db-bd6rs\" (UID: \"41925e2d-10cb-4dcb-9ce1-ea0a959ee19f\") " pod="openshift-keda/keda-admission-cf49989db-bd6rs" Apr 16 18:12:32.466318 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:32.466195 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdbxl\" (UniqueName: \"kubernetes.io/projected/41925e2d-10cb-4dcb-9ce1-ea0a959ee19f-kube-api-access-wdbxl\") pod \"keda-admission-cf49989db-bd6rs\" (UID: \"41925e2d-10cb-4dcb-9ce1-ea0a959ee19f\") " pod="openshift-keda/keda-admission-cf49989db-bd6rs" Apr 16 18:12:32.567106 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:32.567066 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wdbxl\" (UniqueName: \"kubernetes.io/projected/41925e2d-10cb-4dcb-9ce1-ea0a959ee19f-kube-api-access-wdbxl\") pod \"keda-admission-cf49989db-bd6rs\" (UID: \"41925e2d-10cb-4dcb-9ce1-ea0a959ee19f\") " pod="openshift-keda/keda-admission-cf49989db-bd6rs" Apr 16 18:12:32.567280 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:32.567116 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/41925e2d-10cb-4dcb-9ce1-ea0a959ee19f-certificates\") pod \"keda-admission-cf49989db-bd6rs\" (UID: \"41925e2d-10cb-4dcb-9ce1-ea0a959ee19f\") " pod="openshift-keda/keda-admission-cf49989db-bd6rs" Apr 16 18:12:32.567280 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:12:32.567225 2571 projected.go:264] Couldn't get secret openshift-keda/keda-admission-webhooks-certs: secret "keda-admission-webhooks-certs" not found Apr 16 18:12:32.567280 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:12:32.567243 2571 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-admission-cf49989db-bd6rs: secret "keda-admission-webhooks-certs" not found Apr 16 18:12:32.567376 ip-10-0-128-59 
kubenswrapper[2571]: E0416 18:12:32.567328 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41925e2d-10cb-4dcb-9ce1-ea0a959ee19f-certificates podName:41925e2d-10cb-4dcb-9ce1-ea0a959ee19f nodeName:}" failed. No retries permitted until 2026-04-16 18:12:33.06731257 +0000 UTC m=+610.896275681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/41925e2d-10cb-4dcb-9ce1-ea0a959ee19f-certificates") pod "keda-admission-cf49989db-bd6rs" (UID: "41925e2d-10cb-4dcb-9ce1-ea0a959ee19f") : secret "keda-admission-webhooks-certs" not found
Apr 16 18:12:32.577642 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:32.577606 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdbxl\" (UniqueName: \"kubernetes.io/projected/41925e2d-10cb-4dcb-9ce1-ea0a959ee19f-kube-api-access-wdbxl\") pod \"keda-admission-cf49989db-bd6rs\" (UID: \"41925e2d-10cb-4dcb-9ce1-ea0a959ee19f\") " pod="openshift-keda/keda-admission-cf49989db-bd6rs"
Apr 16 18:12:33.071071 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:33.071033 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/41925e2d-10cb-4dcb-9ce1-ea0a959ee19f-certificates\") pod \"keda-admission-cf49989db-bd6rs\" (UID: \"41925e2d-10cb-4dcb-9ce1-ea0a959ee19f\") " pod="openshift-keda/keda-admission-cf49989db-bd6rs"
Apr 16 18:12:33.073759 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:33.073728 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/41925e2d-10cb-4dcb-9ce1-ea0a959ee19f-certificates\") pod \"keda-admission-cf49989db-bd6rs\" (UID: \"41925e2d-10cb-4dcb-9ce1-ea0a959ee19f\") " pod="openshift-keda/keda-admission-cf49989db-bd6rs"
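A projected volume is prepared as a unit, so the missing keda-admission-webhooks-certs secret fails the entire certificates volume; after the 500ms backoff the secret exists and SetUp succeeds, exactly as with prometheus-operator-tls earlier. To watch for such a secret from outside the node, a client-go loop along these lines would do; the kubeconfig path is an assumption, and the 500ms poll simply mirrors the kubelet's retry interval:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the environment at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		_, err := client.CoreV1().Secrets("openshift-keda").
			Get(context.TODO(), "keda-admission-webhooks-certs", metav1.GetOptions{})
		if err == nil {
			fmt.Println("secret present; the projected volume can now be set up")
			return
		}
		fmt.Println("still waiting:", err)
		time.Sleep(500 * time.Millisecond) // mirrors durationBeforeRetry in the log
	}
}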
Apr 16 18:12:33.316560 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:33.316516 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-bd6rs"
Apr 16 18:12:33.438821 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:33.438750 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-bd6rs"]
Apr 16 18:12:33.441584 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:12:33.441557 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41925e2d_10cb_4dcb_9ce1_ea0a959ee19f.slice/crio-c37c3a1eefddc06195df518ab38b6f1ed3307f1c1251f4c01a3e3f7d005297d0 WatchSource:0}: Error finding container c37c3a1eefddc06195df518ab38b6f1ed3307f1c1251f4c01a3e3f7d005297d0: Status 404 returned error can't find the container with id c37c3a1eefddc06195df518ab38b6f1ed3307f1c1251f4c01a3e3f7d005297d0
Apr 16 18:12:34.374305 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:34.374264 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-bd6rs" event={"ID":"41925e2d-10cb-4dcb-9ce1-ea0a959ee19f","Type":"ContainerStarted","Data":"c37c3a1eefddc06195df518ab38b6f1ed3307f1c1251f4c01a3e3f7d005297d0"}
Apr 16 18:12:36.380858 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:36.380816 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-bd6rs" event={"ID":"41925e2d-10cb-4dcb-9ce1-ea0a959ee19f","Type":"ContainerStarted","Data":"120b89bfd4b3ede152c3a3927d47204bc3bd44036c4c552fded4897a0d9a4702"}
Apr 16 18:12:36.381237 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:36.380942 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-admission-cf49989db-bd6rs"
Apr 16 18:12:36.396760 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:36.396710 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-admission-cf49989db-bd6rs" podStartSLOduration=1.843328914 podStartE2EDuration="4.396695306s" podCreationTimestamp="2026-04-16 18:12:32 +0000 UTC" firstStartedPulling="2026-04-16 18:12:33.44298411 +0000 UTC m=+611.271947225" lastFinishedPulling="2026-04-16 18:12:35.996350506 +0000 UTC m=+613.825313617" observedRunningTime="2026-04-16 18:12:36.395095962 +0000 UTC m=+614.224059096" watchObservedRunningTime="2026-04-16 18:12:36.396695306 +0000 UTC m=+614.225658503"
Apr 16 18:12:57.386566 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:12:57.386533 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-admission-cf49989db-bd6rs"
Apr 16 18:13:44.544940 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.544903 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-659c8cbdc-vvs7b"]
Apr 16 18:13:44.547912 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.547896 2571 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" Apr 16 18:13:44.551172 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.551149 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-controller-manager-dockercfg-znjhv\"" Apr 16 18:13:44.551296 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.551177 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\"" Apr 16 18:13:44.551296 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.551186 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\"" Apr 16 18:13:44.551296 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.551154 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-webhook-server-cert\"" Apr 16 18:13:44.559284 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.559261 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-659c8cbdc-vvs7b"] Apr 16 18:13:44.593743 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.593710 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krg89\" (UniqueName: \"kubernetes.io/projected/1883e66b-0a8c-478b-a2f7-f8dac57722a3-kube-api-access-krg89\") pod \"kserve-controller-manager-659c8cbdc-vvs7b\" (UID: \"1883e66b-0a8c-478b-a2f7-f8dac57722a3\") " pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" Apr 16 18:13:44.593880 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.593767 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1883e66b-0a8c-478b-a2f7-f8dac57722a3-cert\") pod \"kserve-controller-manager-659c8cbdc-vvs7b\" (UID: \"1883e66b-0a8c-478b-a2f7-f8dac57722a3\") " pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" Apr 16 18:13:44.618368 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.618342 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/seaweedfs-86cc847c5c-qs9lj"] Apr 16 18:13:44.621426 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.621411 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-qs9lj" Apr 16 18:13:44.623610 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.623572 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"default-dockercfg-mpsp8\"" Apr 16 18:13:44.623718 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.623697 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"mlpipeline-s3-artifact\"" Apr 16 18:13:44.637389 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.637358 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-qs9lj"] Apr 16 18:13:44.694875 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.694842 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/da2e5378-dcc8-4b0b-b5d0-20ef11c250c3-data\") pod \"seaweedfs-86cc847c5c-qs9lj\" (UID: \"da2e5378-dcc8-4b0b-b5d0-20ef11c250c3\") " pod="kserve/seaweedfs-86cc847c5c-qs9lj" Apr 16 18:13:44.694875 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.694880 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1883e66b-0a8c-478b-a2f7-f8dac57722a3-cert\") pod \"kserve-controller-manager-659c8cbdc-vvs7b\" (UID: \"1883e66b-0a8c-478b-a2f7-f8dac57722a3\") " pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" Apr 16 18:13:44.695153 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.694950 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29g56\" (UniqueName: \"kubernetes.io/projected/da2e5378-dcc8-4b0b-b5d0-20ef11c250c3-kube-api-access-29g56\") pod \"seaweedfs-86cc847c5c-qs9lj\" (UID: \"da2e5378-dcc8-4b0b-b5d0-20ef11c250c3\") " pod="kserve/seaweedfs-86cc847c5c-qs9lj" Apr 16 18:13:44.695153 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.695006 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-krg89\" (UniqueName: \"kubernetes.io/projected/1883e66b-0a8c-478b-a2f7-f8dac57722a3-kube-api-access-krg89\") pod \"kserve-controller-manager-659c8cbdc-vvs7b\" (UID: \"1883e66b-0a8c-478b-a2f7-f8dac57722a3\") " pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" Apr 16 18:13:44.697277 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.697247 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1883e66b-0a8c-478b-a2f7-f8dac57722a3-cert\") pod \"kserve-controller-manager-659c8cbdc-vvs7b\" (UID: \"1883e66b-0a8c-478b-a2f7-f8dac57722a3\") " pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" Apr 16 18:13:44.705829 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.705798 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-krg89\" (UniqueName: \"kubernetes.io/projected/1883e66b-0a8c-478b-a2f7-f8dac57722a3-kube-api-access-krg89\") pod \"kserve-controller-manager-659c8cbdc-vvs7b\" (UID: \"1883e66b-0a8c-478b-a2f7-f8dac57722a3\") " pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" Apr 16 18:13:44.796416 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.796313 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/da2e5378-dcc8-4b0b-b5d0-20ef11c250c3-data\") pod \"seaweedfs-86cc847c5c-qs9lj\" (UID: \"da2e5378-dcc8-4b0b-b5d0-20ef11c250c3\") " pod="kserve/seaweedfs-86cc847c5c-qs9lj" Apr 16 
18:13:44.796416 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.796378 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-29g56\" (UniqueName: \"kubernetes.io/projected/da2e5378-dcc8-4b0b-b5d0-20ef11c250c3-kube-api-access-29g56\") pod \"seaweedfs-86cc847c5c-qs9lj\" (UID: \"da2e5378-dcc8-4b0b-b5d0-20ef11c250c3\") " pod="kserve/seaweedfs-86cc847c5c-qs9lj" Apr 16 18:13:44.796735 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.796715 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/da2e5378-dcc8-4b0b-b5d0-20ef11c250c3-data\") pod \"seaweedfs-86cc847c5c-qs9lj\" (UID: \"da2e5378-dcc8-4b0b-b5d0-20ef11c250c3\") " pod="kserve/seaweedfs-86cc847c5c-qs9lj" Apr 16 18:13:44.804536 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.804515 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-29g56\" (UniqueName: \"kubernetes.io/projected/da2e5378-dcc8-4b0b-b5d0-20ef11c250c3-kube-api-access-29g56\") pod \"seaweedfs-86cc847c5c-qs9lj\" (UID: \"da2e5378-dcc8-4b0b-b5d0-20ef11c250c3\") " pod="kserve/seaweedfs-86cc847c5c-qs9lj" Apr 16 18:13:44.858407 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.858375 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" Apr 16 18:13:44.930039 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.930012 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-qs9lj" Apr 16 18:13:44.977787 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:44.977751 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-659c8cbdc-vvs7b"] Apr 16 18:13:44.982339 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:13:44.982297 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1883e66b_0a8c_478b_a2f7_f8dac57722a3.slice/crio-ebc8c8b12ca7e7d119a5e033d1b5bcffc2e741df76f0901d8f986cbaf494a7da WatchSource:0}: Error finding container ebc8c8b12ca7e7d119a5e033d1b5bcffc2e741df76f0901d8f986cbaf494a7da: Status 404 returned error can't find the container with id ebc8c8b12ca7e7d119a5e033d1b5bcffc2e741df76f0901d8f986cbaf494a7da Apr 16 18:13:45.053946 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:45.053873 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-qs9lj"] Apr 16 18:13:45.056785 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:13:45.056759 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda2e5378_dcc8_4b0b_b5d0_20ef11c250c3.slice/crio-04d33ef63b3b5e7b0c35974a0cb9d190aefc29e6ee81e00c01ed390b5af52c6d WatchSource:0}: Error finding container 04d33ef63b3b5e7b0c35974a0cb9d190aefc29e6ee81e00c01ed390b5af52c6d: Status 404 returned error can't find the container with id 04d33ef63b3b5e7b0c35974a0cb9d190aefc29e6ee81e00c01ed390b5af52c6d Apr 16 18:13:45.567869 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:45.567833 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-qs9lj" event={"ID":"da2e5378-dcc8-4b0b-b5d0-20ef11c250c3","Type":"ContainerStarted","Data":"04d33ef63b3b5e7b0c35974a0cb9d190aefc29e6ee81e00c01ed390b5af52c6d"} Apr 16 18:13:45.568981 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:45.568948 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" event={"ID":"1883e66b-0a8c-478b-a2f7-f8dac57722a3","Type":"ContainerStarted","Data":"ebc8c8b12ca7e7d119a5e033d1b5bcffc2e741df76f0901d8f986cbaf494a7da"} Apr 16 18:13:49.582172 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:49.582136 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-qs9lj" event={"ID":"da2e5378-dcc8-4b0b-b5d0-20ef11c250c3","Type":"ContainerStarted","Data":"a11f4a3242224d51284707c5525dbdfb321c6eb4bf7680814d4dbb3d571ed74a"} Apr 16 18:13:49.582617 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:49.582214 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/seaweedfs-86cc847c5c-qs9lj" Apr 16 18:13:49.583461 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:49.583430 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" event={"ID":"1883e66b-0a8c-478b-a2f7-f8dac57722a3","Type":"ContainerStarted","Data":"bc7e6367c954303c8fd8cc9e861a2d831f0d2daea551d9bfa9c9bda9e87ef726"} Apr 16 18:13:49.583571 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:49.583475 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" Apr 16 18:13:49.599858 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:49.599815 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/seaweedfs-86cc847c5c-qs9lj" podStartSLOduration=1.977228829 podStartE2EDuration="5.599798615s" podCreationTimestamp="2026-04-16 18:13:44 +0000 UTC" firstStartedPulling="2026-04-16 18:13:45.058114951 +0000 UTC m=+682.887078061" lastFinishedPulling="2026-04-16 18:13:48.680684732 +0000 UTC m=+686.509647847" observedRunningTime="2026-04-16 18:13:49.598086326 +0000 UTC m=+687.427049459" watchObservedRunningTime="2026-04-16 18:13:49.599798615 +0000 UTC m=+687.428761741" Apr 16 18:13:49.623606 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:49.623540 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" podStartSLOduration=2.018773083 podStartE2EDuration="5.623525713s" podCreationTimestamp="2026-04-16 18:13:44 +0000 UTC" firstStartedPulling="2026-04-16 18:13:44.983899639 +0000 UTC m=+682.812862752" lastFinishedPulling="2026-04-16 18:13:48.588652268 +0000 UTC m=+686.417615382" observedRunningTime="2026-04-16 18:13:49.621696119 +0000 UTC m=+687.450659253" watchObservedRunningTime="2026-04-16 18:13:49.623525713 +0000 UTC m=+687.452488845" Apr 16 18:13:55.588338 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:13:55.588307 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/seaweedfs-86cc847c5c-qs9lj" Apr 16 18:14:20.151651 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.151578 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-659c8cbdc-vvs7b"] Apr 16 18:14:20.152226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.151961 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" podUID="1883e66b-0a8c-478b-a2f7-f8dac57722a3" containerName="manager" containerID="cri-o://bc7e6367c954303c8fd8cc9e861a2d831f0d2daea551d9bfa9c9bda9e87ef726" gracePeriod=10 Apr 16 18:14:20.156740 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.156716 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" Apr 16 18:14:20.177739 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.177718 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-659c8cbdc-bggzs"] Apr 16 18:14:20.179852 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.179832 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-659c8cbdc-bggzs" Apr 16 18:14:20.190906 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.190881 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-659c8cbdc-bggzs"] Apr 16 18:14:20.284753 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.284719 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqzl2\" (UniqueName: \"kubernetes.io/projected/55c50b9d-6894-48c1-84f2-00aa20a103d1-kube-api-access-zqzl2\") pod \"kserve-controller-manager-659c8cbdc-bggzs\" (UID: \"55c50b9d-6894-48c1-84f2-00aa20a103d1\") " pod="kserve/kserve-controller-manager-659c8cbdc-bggzs" Apr 16 18:14:20.284921 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.284833 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55c50b9d-6894-48c1-84f2-00aa20a103d1-cert\") pod \"kserve-controller-manager-659c8cbdc-bggzs\" (UID: \"55c50b9d-6894-48c1-84f2-00aa20a103d1\") " pod="kserve/kserve-controller-manager-659c8cbdc-bggzs" Apr 16 18:14:20.385334 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.385306 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zqzl2\" (UniqueName: \"kubernetes.io/projected/55c50b9d-6894-48c1-84f2-00aa20a103d1-kube-api-access-zqzl2\") pod \"kserve-controller-manager-659c8cbdc-bggzs\" (UID: \"55c50b9d-6894-48c1-84f2-00aa20a103d1\") " pod="kserve/kserve-controller-manager-659c8cbdc-bggzs" Apr 16 18:14:20.385457 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.385372 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55c50b9d-6894-48c1-84f2-00aa20a103d1-cert\") pod \"kserve-controller-manager-659c8cbdc-bggzs\" (UID: \"55c50b9d-6894-48c1-84f2-00aa20a103d1\") " pod="kserve/kserve-controller-manager-659c8cbdc-bggzs" Apr 16 18:14:20.387605 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.387567 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" Apr 16 18:14:20.387744 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.387726 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55c50b9d-6894-48c1-84f2-00aa20a103d1-cert\") pod \"kserve-controller-manager-659c8cbdc-bggzs\" (UID: \"55c50b9d-6894-48c1-84f2-00aa20a103d1\") " pod="kserve/kserve-controller-manager-659c8cbdc-bggzs" Apr 16 18:14:20.394689 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.394665 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqzl2\" (UniqueName: \"kubernetes.io/projected/55c50b9d-6894-48c1-84f2-00aa20a103d1-kube-api-access-zqzl2\") pod \"kserve-controller-manager-659c8cbdc-bggzs\" (UID: \"55c50b9d-6894-48c1-84f2-00aa20a103d1\") " pod="kserve/kserve-controller-manager-659c8cbdc-bggzs" Apr 16 18:14:20.486369 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.486281 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1883e66b-0a8c-478b-a2f7-f8dac57722a3-cert\") pod \"1883e66b-0a8c-478b-a2f7-f8dac57722a3\" (UID: \"1883e66b-0a8c-478b-a2f7-f8dac57722a3\") " Apr 16 18:14:20.486369 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.486341 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krg89\" (UniqueName: \"kubernetes.io/projected/1883e66b-0a8c-478b-a2f7-f8dac57722a3-kube-api-access-krg89\") pod \"1883e66b-0a8c-478b-a2f7-f8dac57722a3\" (UID: \"1883e66b-0a8c-478b-a2f7-f8dac57722a3\") " Apr 16 18:14:20.488494 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.488467 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1883e66b-0a8c-478b-a2f7-f8dac57722a3-cert" (OuterVolumeSpecName: "cert") pod "1883e66b-0a8c-478b-a2f7-f8dac57722a3" (UID: "1883e66b-0a8c-478b-a2f7-f8dac57722a3"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:14:20.488613 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.488490 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1883e66b-0a8c-478b-a2f7-f8dac57722a3-kube-api-access-krg89" (OuterVolumeSpecName: "kube-api-access-krg89") pod "1883e66b-0a8c-478b-a2f7-f8dac57722a3" (UID: "1883e66b-0a8c-478b-a2f7-f8dac57722a3"). InnerVolumeSpecName "kube-api-access-krg89". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 18:14:20.528379 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.528352 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/kserve-controller-manager-659c8cbdc-bggzs" Apr 16 18:14:20.587330 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.587286 2571 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1883e66b-0a8c-478b-a2f7-f8dac57722a3-cert\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:14:20.587330 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.587310 2571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-krg89\" (UniqueName: \"kubernetes.io/projected/1883e66b-0a8c-478b-a2f7-f8dac57722a3-kube-api-access-krg89\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:14:20.653774 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.653748 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-659c8cbdc-bggzs"] Apr 16 18:14:20.655413 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:14:20.655388 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55c50b9d_6894_48c1_84f2_00aa20a103d1.slice/crio-2d16e73dd0849c36daddb64d9db3e7b445fa93ead8b7d594b80253415c2878f0 WatchSource:0}: Error finding container 2d16e73dd0849c36daddb64d9db3e7b445fa93ead8b7d594b80253415c2878f0: Status 404 returned error can't find the container with id 2d16e73dd0849c36daddb64d9db3e7b445fa93ead8b7d594b80253415c2878f0 Apr 16 18:14:20.666697 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.666672 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-659c8cbdc-bggzs" event={"ID":"55c50b9d-6894-48c1-84f2-00aa20a103d1","Type":"ContainerStarted","Data":"2d16e73dd0849c36daddb64d9db3e7b445fa93ead8b7d594b80253415c2878f0"} Apr 16 18:14:20.667644 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.667624 2571 generic.go:358] "Generic (PLEG): container finished" podID="1883e66b-0a8c-478b-a2f7-f8dac57722a3" containerID="bc7e6367c954303c8fd8cc9e861a2d831f0d2daea551d9bfa9c9bda9e87ef726" exitCode=0 Apr 16 18:14:20.667731 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.667684 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" Apr 16 18:14:20.667731 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.667704 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" event={"ID":"1883e66b-0a8c-478b-a2f7-f8dac57722a3","Type":"ContainerDied","Data":"bc7e6367c954303c8fd8cc9e861a2d831f0d2daea551d9bfa9c9bda9e87ef726"} Apr 16 18:14:20.667817 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.667736 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-659c8cbdc-vvs7b" event={"ID":"1883e66b-0a8c-478b-a2f7-f8dac57722a3","Type":"ContainerDied","Data":"ebc8c8b12ca7e7d119a5e033d1b5bcffc2e741df76f0901d8f986cbaf494a7da"} Apr 16 18:14:20.667817 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.667751 2571 scope.go:117] "RemoveContainer" containerID="bc7e6367c954303c8fd8cc9e861a2d831f0d2daea551d9bfa9c9bda9e87ef726" Apr 16 18:14:20.676093 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.675981 2571 scope.go:117] "RemoveContainer" containerID="bc7e6367c954303c8fd8cc9e861a2d831f0d2daea551d9bfa9c9bda9e87ef726" Apr 16 18:14:20.676311 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:14:20.676278 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc7e6367c954303c8fd8cc9e861a2d831f0d2daea551d9bfa9c9bda9e87ef726\": container with ID starting with bc7e6367c954303c8fd8cc9e861a2d831f0d2daea551d9bfa9c9bda9e87ef726 not found: ID does not exist" containerID="bc7e6367c954303c8fd8cc9e861a2d831f0d2daea551d9bfa9c9bda9e87ef726" Apr 16 18:14:20.676403 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.676315 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc7e6367c954303c8fd8cc9e861a2d831f0d2daea551d9bfa9c9bda9e87ef726"} err="failed to get container status \"bc7e6367c954303c8fd8cc9e861a2d831f0d2daea551d9bfa9c9bda9e87ef726\": rpc error: code = NotFound desc = could not find container \"bc7e6367c954303c8fd8cc9e861a2d831f0d2daea551d9bfa9c9bda9e87ef726\": container with ID starting with bc7e6367c954303c8fd8cc9e861a2d831f0d2daea551d9bfa9c9bda9e87ef726 not found: ID does not exist" Apr 16 18:14:20.691896 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.691876 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-659c8cbdc-vvs7b"] Apr 16 18:14:20.698015 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.697994 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve/kserve-controller-manager-659c8cbdc-vvs7b"] Apr 16 18:14:20.713802 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:20.713778 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1883e66b-0a8c-478b-a2f7-f8dac57722a3" path="/var/lib/kubelet/pods/1883e66b-0a8c-478b-a2f7-f8dac57722a3/volumes" Apr 16 18:14:21.672213 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:21.672175 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-659c8cbdc-bggzs" event={"ID":"55c50b9d-6894-48c1-84f2-00aa20a103d1","Type":"ContainerStarted","Data":"fdb1b3e2dd312a2789757d332643bda2269c1da83c2aac20b10b099a4ae535f4"} Apr 16 18:14:21.672638 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:21.672296 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-659c8cbdc-bggzs" Apr 16 18:14:21.691700 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:21.691653 2571 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-659c8cbdc-bggzs" podStartSLOduration=1.3427253270000001 podStartE2EDuration="1.691639639s" podCreationTimestamp="2026-04-16 18:14:20 +0000 UTC" firstStartedPulling="2026-04-16 18:14:20.6565577 +0000 UTC m=+718.485520811" lastFinishedPulling="2026-04-16 18:14:21.005472007 +0000 UTC m=+718.834435123" observedRunningTime="2026-04-16 18:14:21.690402376 +0000 UTC m=+719.519365509" watchObservedRunningTime="2026-04-16 18:14:21.691639639 +0000 UTC m=+719.520602796" Apr 16 18:14:52.681526 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:52.681491 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-659c8cbdc-bggzs" Apr 16 18:14:53.575634 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:53.575584 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/odh-model-controller-696fc77849-8bmzs"] Apr 16 18:14:53.575898 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:53.575886 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1883e66b-0a8c-478b-a2f7-f8dac57722a3" containerName="manager" Apr 16 18:14:53.575940 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:53.575899 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="1883e66b-0a8c-478b-a2f7-f8dac57722a3" containerName="manager" Apr 16 18:14:53.575977 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:53.575962 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="1883e66b-0a8c-478b-a2f7-f8dac57722a3" containerName="manager" Apr 16 18:14:53.577778 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:53.577761 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/odh-model-controller-696fc77849-8bmzs" Apr 16 18:14:53.580438 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:53.580401 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-dockercfg-hkmnq\"" Apr 16 18:14:53.581124 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:53.580761 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-webhook-cert\"" Apr 16 18:14:53.590275 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:53.590250 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-8bmzs"] Apr 16 18:14:53.627960 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:53.627921 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nqxr\" (UniqueName: \"kubernetes.io/projected/c40caa73-7b22-4452-9860-7a9af77112b5-kube-api-access-9nqxr\") pod \"odh-model-controller-696fc77849-8bmzs\" (UID: \"c40caa73-7b22-4452-9860-7a9af77112b5\") " pod="kserve/odh-model-controller-696fc77849-8bmzs" Apr 16 18:14:53.627960 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:53.627953 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c40caa73-7b22-4452-9860-7a9af77112b5-cert\") pod \"odh-model-controller-696fc77849-8bmzs\" (UID: \"c40caa73-7b22-4452-9860-7a9af77112b5\") " pod="kserve/odh-model-controller-696fc77849-8bmzs" Apr 16 18:14:53.728282 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:53.728243 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9nqxr\" (UniqueName: 
\"kubernetes.io/projected/c40caa73-7b22-4452-9860-7a9af77112b5-kube-api-access-9nqxr\") pod \"odh-model-controller-696fc77849-8bmzs\" (UID: \"c40caa73-7b22-4452-9860-7a9af77112b5\") " pod="kserve/odh-model-controller-696fc77849-8bmzs" Apr 16 18:14:53.728282 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:53.728285 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c40caa73-7b22-4452-9860-7a9af77112b5-cert\") pod \"odh-model-controller-696fc77849-8bmzs\" (UID: \"c40caa73-7b22-4452-9860-7a9af77112b5\") " pod="kserve/odh-model-controller-696fc77849-8bmzs" Apr 16 18:14:53.728726 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:14:53.728456 2571 secret.go:189] Couldn't get secret kserve/odh-model-controller-webhook-cert: secret "odh-model-controller-webhook-cert" not found Apr 16 18:14:53.728726 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:14:53.728517 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c40caa73-7b22-4452-9860-7a9af77112b5-cert podName:c40caa73-7b22-4452-9860-7a9af77112b5 nodeName:}" failed. No retries permitted until 2026-04-16 18:14:54.228501184 +0000 UTC m=+752.057464295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c40caa73-7b22-4452-9860-7a9af77112b5-cert") pod "odh-model-controller-696fc77849-8bmzs" (UID: "c40caa73-7b22-4452-9860-7a9af77112b5") : secret "odh-model-controller-webhook-cert" not found Apr 16 18:14:53.740724 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:53.740693 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nqxr\" (UniqueName: \"kubernetes.io/projected/c40caa73-7b22-4452-9860-7a9af77112b5-kube-api-access-9nqxr\") pod \"odh-model-controller-696fc77849-8bmzs\" (UID: \"c40caa73-7b22-4452-9860-7a9af77112b5\") " pod="kserve/odh-model-controller-696fc77849-8bmzs" Apr 16 18:14:54.232324 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:54.232290 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c40caa73-7b22-4452-9860-7a9af77112b5-cert\") pod \"odh-model-controller-696fc77849-8bmzs\" (UID: \"c40caa73-7b22-4452-9860-7a9af77112b5\") " pod="kserve/odh-model-controller-696fc77849-8bmzs" Apr 16 18:14:54.234831 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:54.234804 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c40caa73-7b22-4452-9860-7a9af77112b5-cert\") pod \"odh-model-controller-696fc77849-8bmzs\" (UID: \"c40caa73-7b22-4452-9860-7a9af77112b5\") " pod="kserve/odh-model-controller-696fc77849-8bmzs" Apr 16 18:14:54.493795 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:54.493694 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/odh-model-controller-696fc77849-8bmzs" Apr 16 18:14:54.612915 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:54.612883 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-8bmzs"] Apr 16 18:14:54.615960 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:14:54.615929 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc40caa73_7b22_4452_9860_7a9af77112b5.slice/crio-f2bceee4f8941e4d154695ba86c7958414eb2f6d715a5e89748b9cfc997a0087 WatchSource:0}: Error finding container f2bceee4f8941e4d154695ba86c7958414eb2f6d715a5e89748b9cfc997a0087: Status 404 returned error can't find the container with id f2bceee4f8941e4d154695ba86c7958414eb2f6d715a5e89748b9cfc997a0087 Apr 16 18:14:54.758824 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:54.758737 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-8bmzs" event={"ID":"c40caa73-7b22-4452-9860-7a9af77112b5","Type":"ContainerStarted","Data":"f2bceee4f8941e4d154695ba86c7958414eb2f6d715a5e89748b9cfc997a0087"} Apr 16 18:14:57.769302 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:57.769271 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-8bmzs" event={"ID":"c40caa73-7b22-4452-9860-7a9af77112b5","Type":"ContainerStarted","Data":"5366d3b4022f1167aa18d9df33a0a1fd4ea0d8b38e21aec478caba920c8ba028"} Apr 16 18:14:57.769728 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:57.769419 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/odh-model-controller-696fc77849-8bmzs" Apr 16 18:14:57.799217 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:14:57.799156 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/odh-model-controller-696fc77849-8bmzs" podStartSLOduration=2.458278229 podStartE2EDuration="4.799137364s" podCreationTimestamp="2026-04-16 18:14:53 +0000 UTC" firstStartedPulling="2026-04-16 18:14:54.617091747 +0000 UTC m=+752.446054859" lastFinishedPulling="2026-04-16 18:14:56.95795087 +0000 UTC m=+754.786913994" observedRunningTime="2026-04-16 18:14:57.798335644 +0000 UTC m=+755.627298776" watchObservedRunningTime="2026-04-16 18:14:57.799137364 +0000 UTC m=+755.628100500" Apr 16 18:15:08.773843 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:15:08.773812 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/odh-model-controller-696fc77849-8bmzs" Apr 16 18:15:30.239564 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:15:30.239530 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc"] Apr 16 18:15:30.242156 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:15:30.242139 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" Apr 16 18:15:30.244871 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:15:30.244852 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-zqf8f\"" Apr 16 18:15:30.251605 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:15:30.251554 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc"] Apr 16 18:15:30.252807 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:15:30.252788 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" Apr 16 18:15:30.411798 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:15:30.411763 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc"] Apr 16 18:15:30.415083 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:15:30.415051 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda28f8fdd_ca6f_4fd5_9f84_66cc20786923.slice/crio-76f018a7f152d8ff22c61e195c0aea7132e8e7ecd2f63cf333356227b06c96c8 WatchSource:0}: Error finding container 76f018a7f152d8ff22c61e195c0aea7132e8e7ecd2f63cf333356227b06c96c8: Status 404 returned error can't find the container with id 76f018a7f152d8ff22c61e195c0aea7132e8e7ecd2f63cf333356227b06c96c8 Apr 16 18:15:30.866530 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:15:30.866486 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" event={"ID":"a28f8fdd-ca6f-4fd5-9f84-66cc20786923","Type":"ContainerStarted","Data":"76f018a7f152d8ff22c61e195c0aea7132e8e7ecd2f63cf333356227b06c96c8"} Apr 16 18:15:42.908918 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:15:42.908880 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" event={"ID":"a28f8fdd-ca6f-4fd5-9f84-66cc20786923","Type":"ContainerStarted","Data":"ec6c31a4b1fa9a32de90e1ee763263680bc7cf8457213be8e721f927898b6541"} Apr 16 18:15:42.909344 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:15:42.909222 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" Apr 16 18:15:42.910715 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:15:42.910686 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" podUID="a28f8fdd-ca6f-4fd5-9f84-66cc20786923" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.21:8080: connect: connection refused" Apr 16 18:15:42.925333 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:15:42.925281 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" podStartSLOduration=1.302278826 podStartE2EDuration="12.925262079s" podCreationTimestamp="2026-04-16 18:15:30 +0000 UTC" firstStartedPulling="2026-04-16 18:15:30.41722456 +0000 UTC m=+788.246187678" lastFinishedPulling="2026-04-16 18:15:42.040207799 +0000 UTC m=+799.869170931" observedRunningTime="2026-04-16 18:15:42.923173481 +0000 UTC m=+800.752136614" watchObservedRunningTime="2026-04-16 18:15:42.925262079 +0000 UTC m=+800.754225215" Apr 16 18:15:43.912948 
Apr 16 18:15:53.913801 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:15:53.913755 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" podUID="a28f8fdd-ca6f-4fd5-9f84-66cc20786923" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.21:8080: connect: connection refused"
Apr 16 18:16:03.913376 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:03.913326 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" podUID="a28f8fdd-ca6f-4fd5-9f84-66cc20786923" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.21:8080: connect: connection refused"
Apr 16 18:16:13.913866 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:13.913819 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" podUID="a28f8fdd-ca6f-4fd5-9f84-66cc20786923" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.21:8080: connect: connection refused"
Apr 16 18:16:23.913723 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:23.913666 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" podUID="a28f8fdd-ca6f-4fd5-9f84-66cc20786923" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.21:8080: connect: connection refused"
Apr 16 18:16:33.914199 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:33.914169 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc"
Apr 16 18:16:50.286215 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:50.286175 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t"]
Apr 16 18:16:50.289841 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:50.289816 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" Apr 16 18:16:50.291979 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:50.291947 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 16 18:16:50.291979 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:50.291958 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-390f9-kube-rbac-proxy-sar-config\"" Apr 16 18:16:50.292167 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:50.292161 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-390f9-serving-cert\"" Apr 16 18:16:50.299137 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:50.299111 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t"] Apr 16 18:16:50.433076 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:50.433012 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37b7bbd3-8d23-42c5-b721-78044f2e91a8-proxy-tls\") pod \"switch-graph-390f9-67fbf8748c-gm76t\" (UID: \"37b7bbd3-8d23-42c5-b721-78044f2e91a8\") " pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" Apr 16 18:16:50.433236 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:50.433094 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37b7bbd3-8d23-42c5-b721-78044f2e91a8-openshift-service-ca-bundle\") pod \"switch-graph-390f9-67fbf8748c-gm76t\" (UID: \"37b7bbd3-8d23-42c5-b721-78044f2e91a8\") " pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" Apr 16 18:16:50.533469 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:50.533439 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37b7bbd3-8d23-42c5-b721-78044f2e91a8-proxy-tls\") pod \"switch-graph-390f9-67fbf8748c-gm76t\" (UID: \"37b7bbd3-8d23-42c5-b721-78044f2e91a8\") " pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" Apr 16 18:16:50.533641 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:50.533488 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37b7bbd3-8d23-42c5-b721-78044f2e91a8-openshift-service-ca-bundle\") pod \"switch-graph-390f9-67fbf8748c-gm76t\" (UID: \"37b7bbd3-8d23-42c5-b721-78044f2e91a8\") " pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" Apr 16 18:16:50.533641 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:16:50.533604 2571 secret.go:189] Couldn't get secret kserve-ci-e2e-test/switch-graph-390f9-serving-cert: secret "switch-graph-390f9-serving-cert" not found Apr 16 18:16:50.533732 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:16:50.533676 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b7bbd3-8d23-42c5-b721-78044f2e91a8-proxy-tls podName:37b7bbd3-8d23-42c5-b721-78044f2e91a8 nodeName:}" failed. No retries permitted until 2026-04-16 18:16:51.033656576 +0000 UTC m=+868.862619694 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/37b7bbd3-8d23-42c5-b721-78044f2e91a8-proxy-tls") pod "switch-graph-390f9-67fbf8748c-gm76t" (UID: "37b7bbd3-8d23-42c5-b721-78044f2e91a8") : secret "switch-graph-390f9-serving-cert" not found Apr 16 18:16:50.534083 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:50.534067 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37b7bbd3-8d23-42c5-b721-78044f2e91a8-openshift-service-ca-bundle\") pod \"switch-graph-390f9-67fbf8748c-gm76t\" (UID: \"37b7bbd3-8d23-42c5-b721-78044f2e91a8\") " pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" Apr 16 18:16:51.037828 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:51.037777 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37b7bbd3-8d23-42c5-b721-78044f2e91a8-proxy-tls\") pod \"switch-graph-390f9-67fbf8748c-gm76t\" (UID: \"37b7bbd3-8d23-42c5-b721-78044f2e91a8\") " pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" Apr 16 18:16:51.040351 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:51.040323 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37b7bbd3-8d23-42c5-b721-78044f2e91a8-proxy-tls\") pod \"switch-graph-390f9-67fbf8748c-gm76t\" (UID: \"37b7bbd3-8d23-42c5-b721-78044f2e91a8\") " pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" Apr 16 18:16:51.200329 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:51.200287 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" Apr 16 18:16:51.321051 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:51.321022 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t"] Apr 16 18:16:51.323624 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:16:51.323581 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37b7bbd3_8d23_42c5_b721_78044f2e91a8.slice/crio-ad99c150821362a20b789ff067885c9d7158b1097b11c3f4910016ab326cb1ab WatchSource:0}: Error finding container ad99c150821362a20b789ff067885c9d7158b1097b11c3f4910016ab326cb1ab: Status 404 returned error can't find the container with id ad99c150821362a20b789ff067885c9d7158b1097b11c3f4910016ab326cb1ab Apr 16 18:16:51.325336 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:51.325318 2571 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 18:16:52.101876 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:52.101834 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" event={"ID":"37b7bbd3-8d23-42c5-b721-78044f2e91a8","Type":"ContainerStarted","Data":"ad99c150821362a20b789ff067885c9d7158b1097b11c3f4910016ab326cb1ab"} Apr 16 18:16:54.111673 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:54.111631 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" event={"ID":"37b7bbd3-8d23-42c5-b721-78044f2e91a8","Type":"ContainerStarted","Data":"54576d882b211444c3201713ec8b9d123993da9226d90e7c8c958ae5a1fe920f"} Apr 16 18:16:54.112136 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:54.111765 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" Apr 16 18:16:54.129057 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:16:54.129009 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" podStartSLOduration=1.717103378 podStartE2EDuration="4.128993868s" podCreationTimestamp="2026-04-16 18:16:50 +0000 UTC" firstStartedPulling="2026-04-16 18:16:51.325444079 +0000 UTC m=+869.154407191" lastFinishedPulling="2026-04-16 18:16:53.737334566 +0000 UTC m=+871.566297681" observedRunningTime="2026-04-16 18:16:54.12741229 +0000 UTC m=+871.956375424" watchObservedRunningTime="2026-04-16 18:16:54.128993868 +0000 UTC m=+871.957957001" Apr 16 18:17:00.120769 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:00.120743 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" Apr 16 18:17:00.481418 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:00.481340 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t"] Apr 16 18:17:00.481665 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:00.481635 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" podUID="37b7bbd3-8d23-42c5-b721-78044f2e91a8" containerName="switch-graph-390f9" containerID="cri-o://54576d882b211444c3201713ec8b9d123993da9226d90e7c8c958ae5a1fe920f" gracePeriod=30 Apr 16 18:17:00.596063 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:00.596026 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc"] Apr 16 18:17:00.596378 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:00.596331 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" podUID="a28f8fdd-ca6f-4fd5-9f84-66cc20786923" containerName="kserve-container" containerID="cri-o://ec6c31a4b1fa9a32de90e1ee763263680bc7cf8457213be8e721f927898b6541" gracePeriod=30 Apr 16 18:17:00.652956 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:00.652916 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9"] Apr 16 18:17:00.656351 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:00.656329 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" Apr 16 18:17:00.675248 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:00.675218 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9"] Apr 16 18:17:00.683938 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:00.683914 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" Apr 16 18:17:00.819141 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:00.819113 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9"] Apr 16 18:17:00.820889 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:17:00.820811 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61350c93_dd52_4dca_b279_01c128b89c20.slice/crio-7129be752e8622ded091e0d13d1b5ef1cb9434430161f3e3450a653b1a061016 WatchSource:0}: Error finding container 7129be752e8622ded091e0d13d1b5ef1cb9434430161f3e3450a653b1a061016: Status 404 returned error can't find the container with id 7129be752e8622ded091e0d13d1b5ef1cb9434430161f3e3450a653b1a061016 Apr 16 18:17:01.131912 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:01.131865 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" event={"ID":"61350c93-dd52-4dca-b279-01c128b89c20","Type":"ContainerStarted","Data":"7d713d3b626c468d1d3b83aeff5689833df1789ae87ce7ae32d8cfbeea23ca76"} Apr 16 18:17:01.131912 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:01.131913 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" event={"ID":"61350c93-dd52-4dca-b279-01c128b89c20","Type":"ContainerStarted","Data":"7129be752e8622ded091e0d13d1b5ef1cb9434430161f3e3450a653b1a061016"} Apr 16 18:17:01.132412 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:01.132026 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" Apr 16 18:17:01.133401 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:01.133376 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" podUID="61350c93-dd52-4dca-b279-01c128b89c20" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 16 18:17:01.147383 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:01.147337 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" podStartSLOduration=1.147323523 podStartE2EDuration="1.147323523s" podCreationTimestamp="2026-04-16 18:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:17:01.146323336 +0000 UTC m=+878.975286486" watchObservedRunningTime="2026-04-16 18:17:01.147323523 +0000 UTC m=+878.976286655" Apr 16 18:17:02.135166 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:02.135127 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" podUID="61350c93-dd52-4dca-b279-01c128b89c20" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 16 18:17:03.939933 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:03.939901 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" Apr 16 18:17:04.141653 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:04.141614 2571 generic.go:358] "Generic (PLEG): container finished" podID="a28f8fdd-ca6f-4fd5-9f84-66cc20786923" containerID="ec6c31a4b1fa9a32de90e1ee763263680bc7cf8457213be8e721f927898b6541" exitCode=0 Apr 16 18:17:04.141818 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:04.141672 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" event={"ID":"a28f8fdd-ca6f-4fd5-9f84-66cc20786923","Type":"ContainerDied","Data":"ec6c31a4b1fa9a32de90e1ee763263680bc7cf8457213be8e721f927898b6541"} Apr 16 18:17:04.141818 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:04.141678 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" Apr 16 18:17:04.141818 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:04.141701 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" event={"ID":"a28f8fdd-ca6f-4fd5-9f84-66cc20786923","Type":"ContainerDied","Data":"76f018a7f152d8ff22c61e195c0aea7132e8e7ecd2f63cf333356227b06c96c8"} Apr 16 18:17:04.141818 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:04.141720 2571 scope.go:117] "RemoveContainer" containerID="ec6c31a4b1fa9a32de90e1ee763263680bc7cf8457213be8e721f927898b6541" Apr 16 18:17:04.156534 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:04.156510 2571 scope.go:117] "RemoveContainer" containerID="ec6c31a4b1fa9a32de90e1ee763263680bc7cf8457213be8e721f927898b6541" Apr 16 18:17:04.156882 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:17:04.156860 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec6c31a4b1fa9a32de90e1ee763263680bc7cf8457213be8e721f927898b6541\": container with ID starting with ec6c31a4b1fa9a32de90e1ee763263680bc7cf8457213be8e721f927898b6541 not found: ID does not exist" containerID="ec6c31a4b1fa9a32de90e1ee763263680bc7cf8457213be8e721f927898b6541" Apr 16 18:17:04.156974 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:04.156889 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec6c31a4b1fa9a32de90e1ee763263680bc7cf8457213be8e721f927898b6541"} err="failed to get container status \"ec6c31a4b1fa9a32de90e1ee763263680bc7cf8457213be8e721f927898b6541\": rpc error: code = NotFound desc = could not find container \"ec6c31a4b1fa9a32de90e1ee763263680bc7cf8457213be8e721f927898b6541\": container with ID starting with ec6c31a4b1fa9a32de90e1ee763263680bc7cf8457213be8e721f927898b6541 not found: ID does not exist" Apr 16 18:17:04.165851 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:04.165824 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc"] Apr 16 18:17:04.169325 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:04.169299 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc"] Apr 16 18:17:04.714473 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:04.714439 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a28f8fdd-ca6f-4fd5-9f84-66cc20786923" path="/var/lib/kubelet/pods/a28f8fdd-ca6f-4fd5-9f84-66cc20786923/volumes" Apr 16 18:17:04.914135 ip-10-0-128-59 
kubenswrapper[2571]: I0416 18:17:04.914089 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-390f9-predictor-5bdd998c9c-dxsnc" podUID="a28f8fdd-ca6f-4fd5-9f84-66cc20786923" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.21:8080: i/o timeout" Apr 16 18:17:05.120324 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:05.120281 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" podUID="37b7bbd3-8d23-42c5-b721-78044f2e91a8" containerName="switch-graph-390f9" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:17:10.119181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:10.119143 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" podUID="37b7bbd3-8d23-42c5-b721-78044f2e91a8" containerName="switch-graph-390f9" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:17:12.135531 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:12.135488 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" podUID="61350c93-dd52-4dca-b279-01c128b89c20" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 16 18:17:15.119853 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:15.119813 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" podUID="37b7bbd3-8d23-42c5-b721-78044f2e91a8" containerName="switch-graph-390f9" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:17:15.120222 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:15.119919 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" Apr 16 18:17:20.119954 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.119912 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" podUID="37b7bbd3-8d23-42c5-b721-78044f2e91a8" containerName="switch-graph-390f9" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:17:20.295351 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.295318 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99"] Apr 16 18:17:20.295677 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.295664 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a28f8fdd-ca6f-4fd5-9f84-66cc20786923" containerName="kserve-container" Apr 16 18:17:20.295721 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.295679 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="a28f8fdd-ca6f-4fd5-9f84-66cc20786923" containerName="kserve-container" Apr 16 18:17:20.295767 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.295745 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="a28f8fdd-ca6f-4fd5-9f84-66cc20786923" containerName="kserve-container" Apr 16 18:17:20.297528 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.297503 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" Apr 16 18:17:20.299690 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.299661 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-kube-rbac-proxy-sar-config\"" Apr 16 18:17:20.299900 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.299885 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-serving-cert\"" Apr 16 18:17:20.309359 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.309338 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99"] Apr 16 18:17:20.369580 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.369538 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/530f1448-da66-4015-8ce1-e8ea6e692cea-proxy-tls\") pod \"model-chainer-77968cf4b8-qdr99\" (UID: \"530f1448-da66-4015-8ce1-e8ea6e692cea\") " pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" Apr 16 18:17:20.369580 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.369578 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/530f1448-da66-4015-8ce1-e8ea6e692cea-openshift-service-ca-bundle\") pod \"model-chainer-77968cf4b8-qdr99\" (UID: \"530f1448-da66-4015-8ce1-e8ea6e692cea\") " pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" Apr 16 18:17:20.470520 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.470435 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/530f1448-da66-4015-8ce1-e8ea6e692cea-proxy-tls\") pod \"model-chainer-77968cf4b8-qdr99\" (UID: \"530f1448-da66-4015-8ce1-e8ea6e692cea\") " pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" Apr 16 18:17:20.470520 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.470480 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/530f1448-da66-4015-8ce1-e8ea6e692cea-openshift-service-ca-bundle\") pod \"model-chainer-77968cf4b8-qdr99\" (UID: \"530f1448-da66-4015-8ce1-e8ea6e692cea\") " pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" Apr 16 18:17:20.471189 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.471159 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/530f1448-da66-4015-8ce1-e8ea6e692cea-openshift-service-ca-bundle\") pod \"model-chainer-77968cf4b8-qdr99\" (UID: \"530f1448-da66-4015-8ce1-e8ea6e692cea\") " pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" Apr 16 18:17:20.472848 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.472827 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/530f1448-da66-4015-8ce1-e8ea6e692cea-proxy-tls\") pod \"model-chainer-77968cf4b8-qdr99\" (UID: \"530f1448-da66-4015-8ce1-e8ea6e692cea\") " pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" Apr 16 18:17:20.607108 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.607076 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" Apr 16 18:17:20.724719 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:20.724629 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99"] Apr 16 18:17:20.728427 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:17:20.728400 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod530f1448_da66_4015_8ce1_e8ea6e692cea.slice/crio-94f929c06043dbe466a8ad14e0af0706b1574c0a490e5fb57c4936d75cf6149d WatchSource:0}: Error finding container 94f929c06043dbe466a8ad14e0af0706b1574c0a490e5fb57c4936d75cf6149d: Status 404 returned error can't find the container with id 94f929c06043dbe466a8ad14e0af0706b1574c0a490e5fb57c4936d75cf6149d Apr 16 18:17:21.193939 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:21.193898 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" event={"ID":"530f1448-da66-4015-8ce1-e8ea6e692cea","Type":"ContainerStarted","Data":"de9a6b30096fcbbf4c6ca6613317ff2579ac67272b2d37caab1bbcfb7d360112"} Apr 16 18:17:21.193939 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:21.193938 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" event={"ID":"530f1448-da66-4015-8ce1-e8ea6e692cea","Type":"ContainerStarted","Data":"94f929c06043dbe466a8ad14e0af0706b1574c0a490e5fb57c4936d75cf6149d"} Apr 16 18:17:21.194399 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:21.194030 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" Apr 16 18:17:21.209738 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:21.209685 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" podStartSLOduration=1.209667269 podStartE2EDuration="1.209667269s" podCreationTimestamp="2026-04-16 18:17:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:17:21.209343388 +0000 UTC m=+899.038306522" watchObservedRunningTime="2026-04-16 18:17:21.209667269 +0000 UTC m=+899.038630467" Apr 16 18:17:22.136169 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:22.136123 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" podUID="61350c93-dd52-4dca-b279-01c128b89c20" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 16 18:17:25.119549 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:25.119505 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" podUID="37b7bbd3-8d23-42c5-b721-78044f2e91a8" containerName="switch-graph-390f9" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:17:27.202843 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:27.202816 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" Apr 16 18:17:30.119037 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:30.119003 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" podUID="37b7bbd3-8d23-42c5-b721-78044f2e91a8" containerName="switch-graph-390f9" 
probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:17:30.390936 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:30.390859 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99"] Apr 16 18:17:30.391102 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:30.391063 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" podUID="530f1448-da66-4015-8ce1-e8ea6e692cea" containerName="model-chainer" containerID="cri-o://de9a6b30096fcbbf4c6ca6613317ff2579ac67272b2d37caab1bbcfb7d360112" gracePeriod=30 Apr 16 18:17:30.510968 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:17:30.510928 2571 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37b7bbd3_8d23_42c5_b721_78044f2e91a8.slice/crio-conmon-54576d882b211444c3201713ec8b9d123993da9226d90e7c8c958ae5a1fe920f.scope\": RecentStats: unable to find data in memory cache]" Apr 16 18:17:30.580656 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:30.580625 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9"] Apr 16 18:17:30.584908 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:30.584885 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" Apr 16 18:17:30.595717 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:30.595691 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9"] Apr 16 18:17:30.595866 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:30.595847 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" Apr 16 18:17:30.759609 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:30.759569 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9"] Apr 16 18:17:30.762631 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:17:30.762599 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3993f680_7849_4f19_9b81_85dd3c228e72.slice/crio-5ee097ccd6f8ed918a0954318d2e0059a4e473327f2814499b3b7f02f876a6e4 WatchSource:0}: Error finding container 5ee097ccd6f8ed918a0954318d2e0059a4e473327f2814499b3b7f02f876a6e4: Status 404 returned error can't find the container with id 5ee097ccd6f8ed918a0954318d2e0059a4e473327f2814499b3b7f02f876a6e4 Apr 16 18:17:31.115500 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.115478 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" Apr 16 18:17:31.156917 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.156884 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37b7bbd3-8d23-42c5-b721-78044f2e91a8-openshift-service-ca-bundle\") pod \"37b7bbd3-8d23-42c5-b721-78044f2e91a8\" (UID: \"37b7bbd3-8d23-42c5-b721-78044f2e91a8\") " Apr 16 18:17:31.157362 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.156925 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37b7bbd3-8d23-42c5-b721-78044f2e91a8-proxy-tls\") pod \"37b7bbd3-8d23-42c5-b721-78044f2e91a8\" (UID: \"37b7bbd3-8d23-42c5-b721-78044f2e91a8\") " Apr 16 18:17:31.157362 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.157265 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37b7bbd3-8d23-42c5-b721-78044f2e91a8-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "37b7bbd3-8d23-42c5-b721-78044f2e91a8" (UID: "37b7bbd3-8d23-42c5-b721-78044f2e91a8"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:17:31.159127 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.159100 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b7bbd3-8d23-42c5-b721-78044f2e91a8-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "37b7bbd3-8d23-42c5-b721-78044f2e91a8" (UID: "37b7bbd3-8d23-42c5-b721-78044f2e91a8"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:17:31.224928 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.224895 2571 generic.go:358] "Generic (PLEG): container finished" podID="37b7bbd3-8d23-42c5-b721-78044f2e91a8" containerID="54576d882b211444c3201713ec8b9d123993da9226d90e7c8c958ae5a1fe920f" exitCode=0 Apr 16 18:17:31.225092 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.224968 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" Apr 16 18:17:31.225092 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.224969 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" event={"ID":"37b7bbd3-8d23-42c5-b721-78044f2e91a8","Type":"ContainerDied","Data":"54576d882b211444c3201713ec8b9d123993da9226d90e7c8c958ae5a1fe920f"} Apr 16 18:17:31.225092 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.225080 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t" event={"ID":"37b7bbd3-8d23-42c5-b721-78044f2e91a8","Type":"ContainerDied","Data":"ad99c150821362a20b789ff067885c9d7158b1097b11c3f4910016ab326cb1ab"} Apr 16 18:17:31.225230 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.225105 2571 scope.go:117] "RemoveContainer" containerID="54576d882b211444c3201713ec8b9d123993da9226d90e7c8c958ae5a1fe920f" Apr 16 18:17:31.226513 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.226489 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" event={"ID":"3993f680-7849-4f19-9b81-85dd3c228e72","Type":"ContainerStarted","Data":"0024ff54a96b4f5878fa53fc0bb9f4c40c7c7d2aa513703bd60b0bad6b045b78"} Apr 16 18:17:31.226669 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.226517 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" event={"ID":"3993f680-7849-4f19-9b81-85dd3c228e72","Type":"ContainerStarted","Data":"5ee097ccd6f8ed918a0954318d2e0059a4e473327f2814499b3b7f02f876a6e4"} Apr 16 18:17:31.226738 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.226695 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" Apr 16 18:17:31.228104 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.228072 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" podUID="3993f680-7849-4f19-9b81-85dd3c228e72" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.25:8080: connect: connection refused" Apr 16 18:17:31.234946 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.234931 2571 scope.go:117] "RemoveContainer" containerID="54576d882b211444c3201713ec8b9d123993da9226d90e7c8c958ae5a1fe920f" Apr 16 18:17:31.235189 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:17:31.235171 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54576d882b211444c3201713ec8b9d123993da9226d90e7c8c958ae5a1fe920f\": container with ID starting with 54576d882b211444c3201713ec8b9d123993da9226d90e7c8c958ae5a1fe920f not found: ID does not exist" containerID="54576d882b211444c3201713ec8b9d123993da9226d90e7c8c958ae5a1fe920f" Apr 16 18:17:31.235230 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.235199 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54576d882b211444c3201713ec8b9d123993da9226d90e7c8c958ae5a1fe920f"} err="failed to get container status \"54576d882b211444c3201713ec8b9d123993da9226d90e7c8c958ae5a1fe920f\": rpc error: code = NotFound desc = could not find container \"54576d882b211444c3201713ec8b9d123993da9226d90e7c8c958ae5a1fe920f\": container with ID starting with 
54576d882b211444c3201713ec8b9d123993da9226d90e7c8c958ae5a1fe920f not found: ID does not exist" Apr 16 18:17:31.244077 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.244039 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" podStartSLOduration=1.244023688 podStartE2EDuration="1.244023688s" podCreationTimestamp="2026-04-16 18:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:17:31.243398038 +0000 UTC m=+909.072361170" watchObservedRunningTime="2026-04-16 18:17:31.244023688 +0000 UTC m=+909.072986823" Apr 16 18:17:31.257257 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.257184 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t"] Apr 16 18:17:31.257531 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.257500 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37b7bbd3-8d23-42c5-b721-78044f2e91a8-proxy-tls\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:17:31.257585 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.257536 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37b7bbd3-8d23-42c5-b721-78044f2e91a8-openshift-service-ca-bundle\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:17:31.261263 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:31.261240 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-390f9-67fbf8748c-gm76t"] Apr 16 18:17:32.135649 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:32.135566 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" podUID="61350c93-dd52-4dca-b279-01c128b89c20" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 16 18:17:32.200955 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:32.200920 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" podUID="530f1448-da66-4015-8ce1-e8ea6e692cea" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:17:32.230667 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:32.230626 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" podUID="3993f680-7849-4f19-9b81-85dd3c228e72" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.25:8080: connect: connection refused" Apr 16 18:17:32.716162 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:32.716126 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37b7bbd3-8d23-42c5-b721-78044f2e91a8" path="/var/lib/kubelet/pods/37b7bbd3-8d23-42c5-b721-78044f2e91a8/volumes" Apr 16 18:17:37.201371 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:37.201333 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" podUID="530f1448-da66-4015-8ce1-e8ea6e692cea" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:17:42.135947 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:42.135903 2571 prober.go:120] "Probe 
failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" podUID="61350c93-dd52-4dca-b279-01c128b89c20" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 16 18:17:42.201371 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:42.201333 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" podUID="530f1448-da66-4015-8ce1-e8ea6e692cea" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:17:42.201533 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:42.201430 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" Apr 16 18:17:42.230829 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:42.230791 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" podUID="3993f680-7849-4f19-9b81-85dd3c228e72" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.25:8080: connect: connection refused" Apr 16 18:17:47.201639 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:47.201578 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" podUID="530f1448-da66-4015-8ce1-e8ea6e692cea" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:17:52.136457 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:52.136419 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" Apr 16 18:17:52.201970 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:52.201934 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" podUID="530f1448-da66-4015-8ce1-e8ea6e692cea" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:17:52.231105 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:52.231063 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" podUID="3993f680-7849-4f19-9b81-85dd3c228e72" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.25:8080: connect: connection refused" Apr 16 18:17:57.201940 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:17:57.201894 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" podUID="530f1448-da66-4015-8ce1-e8ea6e692cea" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:18:00.529193 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:00.529169 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" Apr 16 18:18:00.593362 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:00.593328 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/530f1448-da66-4015-8ce1-e8ea6e692cea-openshift-service-ca-bundle\") pod \"530f1448-da66-4015-8ce1-e8ea6e692cea\" (UID: \"530f1448-da66-4015-8ce1-e8ea6e692cea\") " Apr 16 18:18:00.593550 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:00.593444 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/530f1448-da66-4015-8ce1-e8ea6e692cea-proxy-tls\") pod \"530f1448-da66-4015-8ce1-e8ea6e692cea\" (UID: \"530f1448-da66-4015-8ce1-e8ea6e692cea\") " Apr 16 18:18:00.593740 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:00.593714 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/530f1448-da66-4015-8ce1-e8ea6e692cea-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "530f1448-da66-4015-8ce1-e8ea6e692cea" (UID: "530f1448-da66-4015-8ce1-e8ea6e692cea"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:18:00.595866 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:00.595836 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/530f1448-da66-4015-8ce1-e8ea6e692cea-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "530f1448-da66-4015-8ce1-e8ea6e692cea" (UID: "530f1448-da66-4015-8ce1-e8ea6e692cea"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:18:00.694554 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:00.694461 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/530f1448-da66-4015-8ce1-e8ea6e692cea-openshift-service-ca-bundle\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:18:00.694554 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:00.694495 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/530f1448-da66-4015-8ce1-e8ea6e692cea-proxy-tls\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:18:01.321123 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:01.321089 2571 generic.go:358] "Generic (PLEG): container finished" podID="530f1448-da66-4015-8ce1-e8ea6e692cea" containerID="de9a6b30096fcbbf4c6ca6613317ff2579ac67272b2d37caab1bbcfb7d360112" exitCode=0 Apr 16 18:18:01.321309 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:01.321189 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" Apr 16 18:18:01.321309 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:01.321178 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" event={"ID":"530f1448-da66-4015-8ce1-e8ea6e692cea","Type":"ContainerDied","Data":"de9a6b30096fcbbf4c6ca6613317ff2579ac67272b2d37caab1bbcfb7d360112"} Apr 16 18:18:01.321429 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:01.321314 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99" event={"ID":"530f1448-da66-4015-8ce1-e8ea6e692cea","Type":"ContainerDied","Data":"94f929c06043dbe466a8ad14e0af0706b1574c0a490e5fb57c4936d75cf6149d"} Apr 16 18:18:01.321429 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:01.321337 2571 scope.go:117] "RemoveContainer" containerID="de9a6b30096fcbbf4c6ca6613317ff2579ac67272b2d37caab1bbcfb7d360112" Apr 16 18:18:01.329130 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:01.329110 2571 scope.go:117] "RemoveContainer" containerID="de9a6b30096fcbbf4c6ca6613317ff2579ac67272b2d37caab1bbcfb7d360112" Apr 16 18:18:01.329382 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:18:01.329363 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de9a6b30096fcbbf4c6ca6613317ff2579ac67272b2d37caab1bbcfb7d360112\": container with ID starting with de9a6b30096fcbbf4c6ca6613317ff2579ac67272b2d37caab1bbcfb7d360112 not found: ID does not exist" containerID="de9a6b30096fcbbf4c6ca6613317ff2579ac67272b2d37caab1bbcfb7d360112" Apr 16 18:18:01.329442 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:01.329387 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de9a6b30096fcbbf4c6ca6613317ff2579ac67272b2d37caab1bbcfb7d360112"} err="failed to get container status \"de9a6b30096fcbbf4c6ca6613317ff2579ac67272b2d37caab1bbcfb7d360112\": rpc error: code = NotFound desc = could not find container \"de9a6b30096fcbbf4c6ca6613317ff2579ac67272b2d37caab1bbcfb7d360112\": container with ID starting with de9a6b30096fcbbf4c6ca6613317ff2579ac67272b2d37caab1bbcfb7d360112 not found: ID does not exist" Apr 16 18:18:01.336771 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:01.336746 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99"] Apr 16 18:18:01.340729 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:01.340708 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/model-chainer-77968cf4b8-qdr99"] Apr 16 18:18:02.231317 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:02.231274 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" podUID="3993f680-7849-4f19-9b81-85dd3c228e72" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.25:8080: connect: connection refused" Apr 16 18:18:02.714651 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:02.714584 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="530f1448-da66-4015-8ce1-e8ea6e692cea" path="/var/lib/kubelet/pods/530f1448-da66-4015-8ce1-e8ea6e692cea/volumes" Apr 16 18:18:10.822547 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:10.822513 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57"] Apr 16 18:18:10.822955 ip-10-0-128-59 kubenswrapper[2571]: I0416 
18:18:10.822836 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="37b7bbd3-8d23-42c5-b721-78044f2e91a8" containerName="switch-graph-390f9" Apr 16 18:18:10.822955 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:10.822847 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b7bbd3-8d23-42c5-b721-78044f2e91a8" containerName="switch-graph-390f9" Apr 16 18:18:10.822955 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:10.822868 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="530f1448-da66-4015-8ce1-e8ea6e692cea" containerName="model-chainer" Apr 16 18:18:10.822955 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:10.822874 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="530f1448-da66-4015-8ce1-e8ea6e692cea" containerName="model-chainer" Apr 16 18:18:10.822955 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:10.822917 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="530f1448-da66-4015-8ce1-e8ea6e692cea" containerName="model-chainer" Apr 16 18:18:10.822955 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:10.822926 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="37b7bbd3-8d23-42c5-b721-78044f2e91a8" containerName="switch-graph-390f9" Apr 16 18:18:10.825775 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:10.825758 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" Apr 16 18:18:10.828027 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:10.828001 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-04c3d-serving-cert\"" Apr 16 18:18:10.828027 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:10.828017 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 16 18:18:10.828196 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:10.828033 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-04c3d-kube-rbac-proxy-sar-config\"" Apr 16 18:18:10.833957 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:10.833935 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57"] Apr 16 18:18:10.978519 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:10.978475 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e559e62e-8fb8-4de9-9ecf-54ca36d53802-proxy-tls\") pod \"switch-graph-04c3d-b98467bdb-ssm57\" (UID: \"e559e62e-8fb8-4de9-9ecf-54ca36d53802\") " pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" Apr 16 18:18:10.978519 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:10.978522 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e559e62e-8fb8-4de9-9ecf-54ca36d53802-openshift-service-ca-bundle\") pod \"switch-graph-04c3d-b98467bdb-ssm57\" (UID: \"e559e62e-8fb8-4de9-9ecf-54ca36d53802\") " pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" Apr 16 18:18:11.079444 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:11.079356 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e559e62e-8fb8-4de9-9ecf-54ca36d53802-proxy-tls\") pod 
\"switch-graph-04c3d-b98467bdb-ssm57\" (UID: \"e559e62e-8fb8-4de9-9ecf-54ca36d53802\") " pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" Apr 16 18:18:11.079444 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:11.079407 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e559e62e-8fb8-4de9-9ecf-54ca36d53802-openshift-service-ca-bundle\") pod \"switch-graph-04c3d-b98467bdb-ssm57\" (UID: \"e559e62e-8fb8-4de9-9ecf-54ca36d53802\") " pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" Apr 16 18:18:11.080060 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:11.080035 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e559e62e-8fb8-4de9-9ecf-54ca36d53802-openshift-service-ca-bundle\") pod \"switch-graph-04c3d-b98467bdb-ssm57\" (UID: \"e559e62e-8fb8-4de9-9ecf-54ca36d53802\") " pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" Apr 16 18:18:11.081918 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:11.081899 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e559e62e-8fb8-4de9-9ecf-54ca36d53802-proxy-tls\") pod \"switch-graph-04c3d-b98467bdb-ssm57\" (UID: \"e559e62e-8fb8-4de9-9ecf-54ca36d53802\") " pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" Apr 16 18:18:11.136004 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:11.135957 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" Apr 16 18:18:11.257957 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:11.257870 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57"] Apr 16 18:18:11.260217 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:18:11.260194 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode559e62e_8fb8_4de9_9ecf_54ca36d53802.slice/crio-79069eb9d25b3d084284aeb9e1fca096476cd2f25c3abadc7b9d5e0ec1361656 WatchSource:0}: Error finding container 79069eb9d25b3d084284aeb9e1fca096476cd2f25c3abadc7b9d5e0ec1361656: Status 404 returned error can't find the container with id 79069eb9d25b3d084284aeb9e1fca096476cd2f25c3abadc7b9d5e0ec1361656 Apr 16 18:18:11.354545 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:11.354506 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" event={"ID":"e559e62e-8fb8-4de9-9ecf-54ca36d53802","Type":"ContainerStarted","Data":"3409828d3d338450b75dfbc917bfa8a062b558c1460d8c09bd65db267eff4255"} Apr 16 18:18:11.354545 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:11.354540 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" event={"ID":"e559e62e-8fb8-4de9-9ecf-54ca36d53802","Type":"ContainerStarted","Data":"79069eb9d25b3d084284aeb9e1fca096476cd2f25c3abadc7b9d5e0ec1361656"} Apr 16 18:18:11.354802 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:11.354640 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" Apr 16 18:18:11.375469 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:11.375426 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" podStartSLOduration=1.3754125099999999 podStartE2EDuration="1.37541251s" podCreationTimestamp="2026-04-16 18:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:18:11.373782782 +0000 UTC m=+949.202745916" watchObservedRunningTime="2026-04-16 18:18:11.37541251 +0000 UTC m=+949.204375642" Apr 16 18:18:12.230919 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:12.230875 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" podUID="3993f680-7849-4f19-9b81-85dd3c228e72" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.25:8080: connect: connection refused" Apr 16 18:18:17.364781 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:17.364746 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" Apr 16 18:18:22.232537 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:22.232508 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" Apr 16 18:18:40.601180 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:40.601102 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5"] Apr 16 18:18:40.604166 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:40.604145 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" Apr 16 18:18:40.606276 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:40.606256 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-0403e-kube-rbac-proxy-sar-config\"" Apr 16 18:18:40.606276 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:40.606270 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-0403e-serving-cert\"" Apr 16 18:18:40.606628 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:40.606608 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce6c77c0-51fa-4864-87f4-072aae4ca35e-proxy-tls\") pod \"sequence-graph-0403e-6dd9d7fbd6-55qr5\" (UID: \"ce6c77c0-51fa-4864-87f4-072aae4ca35e\") " pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" Apr 16 18:18:40.606686 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:40.606674 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce6c77c0-51fa-4864-87f4-072aae4ca35e-openshift-service-ca-bundle\") pod \"sequence-graph-0403e-6dd9d7fbd6-55qr5\" (UID: \"ce6c77c0-51fa-4864-87f4-072aae4ca35e\") " pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" Apr 16 18:18:40.613820 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:40.613796 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5"] Apr 16 18:18:40.707023 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:40.706992 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce6c77c0-51fa-4864-87f4-072aae4ca35e-proxy-tls\") pod 
\"sequence-graph-0403e-6dd9d7fbd6-55qr5\" (UID: \"ce6c77c0-51fa-4864-87f4-072aae4ca35e\") " pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" Apr 16 18:18:40.707198 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:40.707040 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce6c77c0-51fa-4864-87f4-072aae4ca35e-openshift-service-ca-bundle\") pod \"sequence-graph-0403e-6dd9d7fbd6-55qr5\" (UID: \"ce6c77c0-51fa-4864-87f4-072aae4ca35e\") " pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" Apr 16 18:18:40.707611 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:40.707568 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce6c77c0-51fa-4864-87f4-072aae4ca35e-openshift-service-ca-bundle\") pod \"sequence-graph-0403e-6dd9d7fbd6-55qr5\" (UID: \"ce6c77c0-51fa-4864-87f4-072aae4ca35e\") " pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" Apr 16 18:18:40.709419 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:40.709391 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce6c77c0-51fa-4864-87f4-072aae4ca35e-proxy-tls\") pod \"sequence-graph-0403e-6dd9d7fbd6-55qr5\" (UID: \"ce6c77c0-51fa-4864-87f4-072aae4ca35e\") " pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" Apr 16 18:18:40.915299 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:40.915189 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" Apr 16 18:18:41.031747 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:41.031718 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5"] Apr 16 18:18:41.035572 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:18:41.035543 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce6c77c0_51fa_4864_87f4_072aae4ca35e.slice/crio-7a35b2688410779ae1a9af4991152d88141b485c9cdecda699c35f337c5c04d9 WatchSource:0}: Error finding container 7a35b2688410779ae1a9af4991152d88141b485c9cdecda699c35f337c5c04d9: Status 404 returned error can't find the container with id 7a35b2688410779ae1a9af4991152d88141b485c9cdecda699c35f337c5c04d9 Apr 16 18:18:41.439211 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:41.439175 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" event={"ID":"ce6c77c0-51fa-4864-87f4-072aae4ca35e","Type":"ContainerStarted","Data":"741889ea253f37eebd48b761a0440969dad26892ed9a85fdcd5d7856f20c1533"} Apr 16 18:18:41.439211 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:41.439217 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" event={"ID":"ce6c77c0-51fa-4864-87f4-072aae4ca35e","Type":"ContainerStarted","Data":"7a35b2688410779ae1a9af4991152d88141b485c9cdecda699c35f337c5c04d9"} Apr 16 18:18:41.439436 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:41.439250 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" Apr 16 18:18:41.456480 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:41.456433 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" podStartSLOduration=1.456418661 podStartE2EDuration="1.456418661s" podCreationTimestamp="2026-04-16 18:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:18:41.454815744 +0000 UTC m=+979.283778878" watchObservedRunningTime="2026-04-16 18:18:41.456418661 +0000 UTC m=+979.285381792" Apr 16 18:18:47.447740 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:18:47.447711 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" Apr 16 18:26:25.541897 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:25.541805 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57"] Apr 16 18:26:25.542427 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:25.542123 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" podUID="e559e62e-8fb8-4de9-9ecf-54ca36d53802" containerName="switch-graph-04c3d" containerID="cri-o://3409828d3d338450b75dfbc917bfa8a062b558c1460d8c09bd65db267eff4255" gracePeriod=30 Apr 16 18:26:25.678447 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:25.678411 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9"] Apr 16 18:26:25.678795 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:25.678741 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" podUID="61350c93-dd52-4dca-b279-01c128b89c20" containerName="kserve-container" containerID="cri-o://7d713d3b626c468d1d3b83aeff5689833df1789ae87ce7ae32d8cfbeea23ca76" gracePeriod=30 Apr 16 18:26:25.743270 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:25.743241 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24"] Apr 16 18:26:25.746737 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:25.746702 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" Apr 16 18:26:25.759155 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:25.759135 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" Apr 16 18:26:25.765903 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:25.765875 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24"] Apr 16 18:26:25.893656 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:25.893622 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24"] Apr 16 18:26:25.896758 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:26:25.896721 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda50cbc4a_a789_49ff_827a_96113af18a02.slice/crio-2f94143ec3403618d803a7bc852e0a53ba09b749df9a290218852610cc4ab364 WatchSource:0}: Error finding container 2f94143ec3403618d803a7bc852e0a53ba09b749df9a290218852610cc4ab364: Status 404 returned error can't find the container with id 2f94143ec3403618d803a7bc852e0a53ba09b749df9a290218852610cc4ab364 Apr 16 18:26:25.898459 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:25.898443 2571 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 18:26:26.753574 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:26.753532 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" event={"ID":"a50cbc4a-a789-49ff-827a-96113af18a02","Type":"ContainerStarted","Data":"4e75f50ca902eefc8667e0a4b1ed172b084145fb3523736db7e8ebebc8f86aaa"} Apr 16 18:26:26.753574 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:26.753575 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" event={"ID":"a50cbc4a-a789-49ff-827a-96113af18a02","Type":"ContainerStarted","Data":"2f94143ec3403618d803a7bc852e0a53ba09b749df9a290218852610cc4ab364"} Apr 16 18:26:26.753979 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:26.753868 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" Apr 16 18:26:26.755094 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:26.755060 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" podUID="a50cbc4a-a789-49ff-827a-96113af18a02" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.28:8080: connect: connection refused" Apr 16 18:26:26.769911 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:26.769867 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" podStartSLOduration=1.7698550910000002 podStartE2EDuration="1.769855091s" podCreationTimestamp="2026-04-16 18:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:26:26.769218726 +0000 UTC m=+1444.598181859" watchObservedRunningTime="2026-04-16 18:26:26.769855091 +0000 UTC m=+1444.598818225" Apr 16 18:26:27.362645 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:27.362585 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" podUID="e559e62e-8fb8-4de9-9ecf-54ca36d53802" containerName="switch-graph-04c3d" probeResult="failure" output="HTTP probe failed 
with statuscode: 503" Apr 16 18:26:27.756979 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:27.756897 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" podUID="a50cbc4a-a789-49ff-827a-96113af18a02" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.28:8080: connect: connection refused" Apr 16 18:26:28.724457 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:28.724429 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" Apr 16 18:26:28.761638 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:28.761578 2571 generic.go:358] "Generic (PLEG): container finished" podID="61350c93-dd52-4dca-b279-01c128b89c20" containerID="7d713d3b626c468d1d3b83aeff5689833df1789ae87ce7ae32d8cfbeea23ca76" exitCode=0 Apr 16 18:26:28.762056 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:28.761645 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" Apr 16 18:26:28.762056 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:28.761658 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" event={"ID":"61350c93-dd52-4dca-b279-01c128b89c20","Type":"ContainerDied","Data":"7d713d3b626c468d1d3b83aeff5689833df1789ae87ce7ae32d8cfbeea23ca76"} Apr 16 18:26:28.762056 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:28.761697 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9" event={"ID":"61350c93-dd52-4dca-b279-01c128b89c20","Type":"ContainerDied","Data":"7129be752e8622ded091e0d13d1b5ef1cb9434430161f3e3450a653b1a061016"} Apr 16 18:26:28.762056 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:28.761718 2571 scope.go:117] "RemoveContainer" containerID="7d713d3b626c468d1d3b83aeff5689833df1789ae87ce7ae32d8cfbeea23ca76" Apr 16 18:26:28.770416 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:28.770397 2571 scope.go:117] "RemoveContainer" containerID="7d713d3b626c468d1d3b83aeff5689833df1789ae87ce7ae32d8cfbeea23ca76" Apr 16 18:26:28.770724 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:26:28.770705 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d713d3b626c468d1d3b83aeff5689833df1789ae87ce7ae32d8cfbeea23ca76\": container with ID starting with 7d713d3b626c468d1d3b83aeff5689833df1789ae87ce7ae32d8cfbeea23ca76 not found: ID does not exist" containerID="7d713d3b626c468d1d3b83aeff5689833df1789ae87ce7ae32d8cfbeea23ca76" Apr 16 18:26:28.770828 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:28.770731 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d713d3b626c468d1d3b83aeff5689833df1789ae87ce7ae32d8cfbeea23ca76"} err="failed to get container status \"7d713d3b626c468d1d3b83aeff5689833df1789ae87ce7ae32d8cfbeea23ca76\": rpc error: code = NotFound desc = could not find container \"7d713d3b626c468d1d3b83aeff5689833df1789ae87ce7ae32d8cfbeea23ca76\": container with ID starting with 7d713d3b626c468d1d3b83aeff5689833df1789ae87ce7ae32d8cfbeea23ca76 not found: ID does not exist" Apr 16 18:26:28.781880 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:28.781859 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9"] Apr 16 18:26:28.787906 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:28.787888 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-04c3d-predictor-6c8787dd55-x9kf9"] Apr 16 18:26:30.714058 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:30.714027 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61350c93-dd52-4dca-b279-01c128b89c20" path="/var/lib/kubelet/pods/61350c93-dd52-4dca-b279-01c128b89c20/volumes" Apr 16 18:26:32.362547 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:32.362507 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" podUID="e559e62e-8fb8-4de9-9ecf-54ca36d53802" containerName="switch-graph-04c3d" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:26:37.362747 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:37.362700 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" podUID="e559e62e-8fb8-4de9-9ecf-54ca36d53802" containerName="switch-graph-04c3d" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:26:37.363119 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:37.362814 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" Apr 16 18:26:37.757326 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:37.757232 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" podUID="a50cbc4a-a789-49ff-827a-96113af18a02" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.28:8080: connect: connection refused" Apr 16 18:26:42.362372 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:42.362333 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" podUID="e559e62e-8fb8-4de9-9ecf-54ca36d53802" containerName="switch-graph-04c3d" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:26:47.362161 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:47.362113 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" podUID="e559e62e-8fb8-4de9-9ecf-54ca36d53802" containerName="switch-graph-04c3d" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:26:47.758008 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:47.757917 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" podUID="a50cbc4a-a789-49ff-827a-96113af18a02" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.28:8080: connect: connection refused" Apr 16 18:26:52.363094 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:52.363057 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" podUID="e559e62e-8fb8-4de9-9ecf-54ca36d53802" containerName="switch-graph-04c3d" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:26:55.426493 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.426458 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5"] Apr 16 18:26:55.427011 ip-10-0-128-59 kubenswrapper[2571]: I0416 
18:26:55.426743 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" podUID="ce6c77c0-51fa-4864-87f4-072aae4ca35e" containerName="sequence-graph-0403e" containerID="cri-o://741889ea253f37eebd48b761a0440969dad26892ed9a85fdcd5d7856f20c1533" gracePeriod=30
Apr 16 18:26:55.531988 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.531954 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9"]
Apr 16 18:26:55.532278 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.532249 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" podUID="3993f680-7849-4f19-9b81-85dd3c228e72" containerName="kserve-container" containerID="cri-o://0024ff54a96b4f5878fa53fc0bb9f4c40c7c7d2aa513703bd60b0bad6b045b78" gracePeriod=30
Apr 16 18:26:55.543267 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.543240 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq"]
Apr 16 18:26:55.543560 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.543549 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="61350c93-dd52-4dca-b279-01c128b89c20" containerName="kserve-container"
Apr 16 18:26:55.543633 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.543562 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="61350c93-dd52-4dca-b279-01c128b89c20" containerName="kserve-container"
Apr 16 18:26:55.543675 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.543640 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="61350c93-dd52-4dca-b279-01c128b89c20" containerName="kserve-container"
Apr 16 18:26:55.546525 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.546507 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq"
Apr 16 18:26:55.556800 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.556772 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq"]
Apr 16 18:26:55.604622 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.604578 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq"
Apr 16 18:26:55.719041 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.718841 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57"
Apr 16 18:26:55.750411 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.750389 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq"]
Apr 16 18:26:55.752840 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:26:55.752810 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode07116fe_c239_43f6_beac_013fd89bbf35.slice/crio-5fde49d36fa1dc46619932b062e60d1d0258dcdf203ffe22576f197f5b955579 WatchSource:0}: Error finding container 5fde49d36fa1dc46619932b062e60d1d0258dcdf203ffe22576f197f5b955579: Status 404 returned error can't find the container with id 5fde49d36fa1dc46619932b062e60d1d0258dcdf203ffe22576f197f5b955579
Apr 16 18:26:55.831780 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.831756 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e559e62e-8fb8-4de9-9ecf-54ca36d53802-openshift-service-ca-bundle\") pod \"e559e62e-8fb8-4de9-9ecf-54ca36d53802\" (UID: \"e559e62e-8fb8-4de9-9ecf-54ca36d53802\") "
Apr 16 18:26:55.831889 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.831834 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e559e62e-8fb8-4de9-9ecf-54ca36d53802-proxy-tls\") pod \"e559e62e-8fb8-4de9-9ecf-54ca36d53802\" (UID: \"e559e62e-8fb8-4de9-9ecf-54ca36d53802\") "
Apr 16 18:26:55.832097 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.832072 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e559e62e-8fb8-4de9-9ecf-54ca36d53802-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "e559e62e-8fb8-4de9-9ecf-54ca36d53802" (UID: "e559e62e-8fb8-4de9-9ecf-54ca36d53802"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 16 18:26:55.833678 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.833658 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e559e62e-8fb8-4de9-9ecf-54ca36d53802-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e559e62e-8fb8-4de9-9ecf-54ca36d53802" (UID: "e559e62e-8fb8-4de9-9ecf-54ca36d53802"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 16 18:26:55.840596 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.840563 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq" event={"ID":"e07116fe-c239-43f6-beac-013fd89bbf35","Type":"ContainerStarted","Data":"34c4a18aa99b7679852987cffd26d42bfcfb9d5e5e6e64333e600b131cc9ff0f"}
Apr 16 18:26:55.840693 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.840617 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq" event={"ID":"e07116fe-c239-43f6-beac-013fd89bbf35","Type":"ContainerStarted","Data":"5fde49d36fa1dc46619932b062e60d1d0258dcdf203ffe22576f197f5b955579"}
Apr 16 18:26:55.840830 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.840809 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq"
Apr 16 18:26:55.841816 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.841780 2571 generic.go:358] "Generic (PLEG): container finished" podID="e559e62e-8fb8-4de9-9ecf-54ca36d53802" containerID="3409828d3d338450b75dfbc917bfa8a062b558c1460d8c09bd65db267eff4255" exitCode=0
Apr 16 18:26:55.841898 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.841842 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" event={"ID":"e559e62e-8fb8-4de9-9ecf-54ca36d53802","Type":"ContainerDied","Data":"3409828d3d338450b75dfbc917bfa8a062b558c1460d8c09bd65db267eff4255"}
Apr 16 18:26:55.841898 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.841888 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" event={"ID":"e559e62e-8fb8-4de9-9ecf-54ca36d53802","Type":"ContainerDied","Data":"79069eb9d25b3d084284aeb9e1fca096476cd2f25c3abadc7b9d5e0ec1361656"}
Apr 16 18:26:55.841973 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.841891 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57"
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57" Apr 16 18:26:55.841973 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.841918 2571 scope.go:117] "RemoveContainer" containerID="3409828d3d338450b75dfbc917bfa8a062b558c1460d8c09bd65db267eff4255" Apr 16 18:26:55.841973 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.841885 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq" podUID="e07116fe-c239-43f6-beac-013fd89bbf35" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.29:8080: connect: connection refused" Apr 16 18:26:55.850315 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.850296 2571 scope.go:117] "RemoveContainer" containerID="3409828d3d338450b75dfbc917bfa8a062b558c1460d8c09bd65db267eff4255" Apr 16 18:26:55.850564 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:26:55.850544 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3409828d3d338450b75dfbc917bfa8a062b558c1460d8c09bd65db267eff4255\": container with ID starting with 3409828d3d338450b75dfbc917bfa8a062b558c1460d8c09bd65db267eff4255 not found: ID does not exist" containerID="3409828d3d338450b75dfbc917bfa8a062b558c1460d8c09bd65db267eff4255" Apr 16 18:26:55.850684 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.850575 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3409828d3d338450b75dfbc917bfa8a062b558c1460d8c09bd65db267eff4255"} err="failed to get container status \"3409828d3d338450b75dfbc917bfa8a062b558c1460d8c09bd65db267eff4255\": rpc error: code = NotFound desc = could not find container \"3409828d3d338450b75dfbc917bfa8a062b558c1460d8c09bd65db267eff4255\": container with ID starting with 3409828d3d338450b75dfbc917bfa8a062b558c1460d8c09bd65db267eff4255 not found: ID does not exist" Apr 16 18:26:55.855353 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.855310 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq" podStartSLOduration=0.855298671 podStartE2EDuration="855.298671ms" podCreationTimestamp="2026-04-16 18:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:26:55.854365653 +0000 UTC m=+1473.683328797" watchObservedRunningTime="2026-04-16 18:26:55.855298671 +0000 UTC m=+1473.684261807" Apr 16 18:26:55.868909 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.868880 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57"] Apr 16 18:26:55.872107 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.872087 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-04c3d-b98467bdb-ssm57"] Apr 16 18:26:55.933067 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.933038 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e559e62e-8fb8-4de9-9ecf-54ca36d53802-proxy-tls\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:26:55.933067 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:55.933064 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e559e62e-8fb8-4de9-9ecf-54ca36d53802-openshift-service-ca-bundle\") on node 
\"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:26:56.716478 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:56.716441 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e559e62e-8fb8-4de9-9ecf-54ca36d53802" path="/var/lib/kubelet/pods/e559e62e-8fb8-4de9-9ecf-54ca36d53802/volumes" Apr 16 18:26:56.845677 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:56.845639 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq" podUID="e07116fe-c239-43f6-beac-013fd89bbf35" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.29:8080: connect: connection refused" Apr 16 18:26:57.446364 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:57.446325 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" podUID="ce6c77c0-51fa-4864-87f4-072aae4ca35e" containerName="sequence-graph-0403e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:26:57.757606 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:57.757498 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" podUID="a50cbc4a-a789-49ff-827a-96113af18a02" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.28:8080: connect: connection refused" Apr 16 18:26:58.578627 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:58.578603 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" Apr 16 18:26:58.852539 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:58.852506 2571 generic.go:358] "Generic (PLEG): container finished" podID="3993f680-7849-4f19-9b81-85dd3c228e72" containerID="0024ff54a96b4f5878fa53fc0bb9f4c40c7c7d2aa513703bd60b0bad6b045b78" exitCode=0 Apr 16 18:26:58.852908 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:58.852571 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" Apr 16 18:26:58.852908 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:58.852606 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" event={"ID":"3993f680-7849-4f19-9b81-85dd3c228e72","Type":"ContainerDied","Data":"0024ff54a96b4f5878fa53fc0bb9f4c40c7c7d2aa513703bd60b0bad6b045b78"} Apr 16 18:26:58.852908 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:58.852646 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9" event={"ID":"3993f680-7849-4f19-9b81-85dd3c228e72","Type":"ContainerDied","Data":"5ee097ccd6f8ed918a0954318d2e0059a4e473327f2814499b3b7f02f876a6e4"} Apr 16 18:26:58.852908 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:58.852667 2571 scope.go:117] "RemoveContainer" containerID="0024ff54a96b4f5878fa53fc0bb9f4c40c7c7d2aa513703bd60b0bad6b045b78" Apr 16 18:26:58.860092 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:58.860076 2571 scope.go:117] "RemoveContainer" containerID="0024ff54a96b4f5878fa53fc0bb9f4c40c7c7d2aa513703bd60b0bad6b045b78" Apr 16 18:26:58.860344 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:26:58.860324 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0024ff54a96b4f5878fa53fc0bb9f4c40c7c7d2aa513703bd60b0bad6b045b78\": container with ID starting with 0024ff54a96b4f5878fa53fc0bb9f4c40c7c7d2aa513703bd60b0bad6b045b78 not found: ID does not exist" containerID="0024ff54a96b4f5878fa53fc0bb9f4c40c7c7d2aa513703bd60b0bad6b045b78" Apr 16 18:26:58.860413 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:58.860353 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0024ff54a96b4f5878fa53fc0bb9f4c40c7c7d2aa513703bd60b0bad6b045b78"} err="failed to get container status \"0024ff54a96b4f5878fa53fc0bb9f4c40c7c7d2aa513703bd60b0bad6b045b78\": rpc error: code = NotFound desc = could not find container \"0024ff54a96b4f5878fa53fc0bb9f4c40c7c7d2aa513703bd60b0bad6b045b78\": container with ID starting with 0024ff54a96b4f5878fa53fc0bb9f4c40c7c7d2aa513703bd60b0bad6b045b78 not found: ID does not exist" Apr 16 18:26:58.868197 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:58.868175 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9"] Apr 16 18:26:58.871519 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:26:58.871500 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-0403e-predictor-5bfb47d58c-d59t9"] Apr 16 18:27:00.715567 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:00.715526 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3993f680-7849-4f19-9b81-85dd3c228e72" path="/var/lib/kubelet/pods/3993f680-7849-4f19-9b81-85dd3c228e72/volumes" Apr 16 18:27:02.446774 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:02.446733 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" podUID="ce6c77c0-51fa-4864-87f4-072aae4ca35e" containerName="sequence-graph-0403e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:27:06.846282 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:06.846235 2571 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq" podUID="e07116fe-c239-43f6-beac-013fd89bbf35" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.29:8080: connect: connection refused" Apr 16 18:27:07.446916 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:07.446877 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" podUID="ce6c77c0-51fa-4864-87f4-072aae4ca35e" containerName="sequence-graph-0403e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:27:07.447076 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:07.446990 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" Apr 16 18:27:07.757367 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:07.757281 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" podUID="a50cbc4a-a789-49ff-827a-96113af18a02" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.28:8080: connect: connection refused" Apr 16 18:27:12.446456 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:12.446411 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" podUID="ce6c77c0-51fa-4864-87f4-072aae4ca35e" containerName="sequence-graph-0403e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:27:16.846176 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:16.846133 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq" podUID="e07116fe-c239-43f6-beac-013fd89bbf35" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.29:8080: connect: connection refused" Apr 16 18:27:17.446489 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:17.446455 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" podUID="ce6c77c0-51fa-4864-87f4-072aae4ca35e" containerName="sequence-graph-0403e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:27:17.757801 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:17.757718 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" Apr 16 18:27:22.446489 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:22.446444 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" podUID="ce6c77c0-51fa-4864-87f4-072aae4ca35e" containerName="sequence-graph-0403e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:27:25.612094 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:25.612074 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" Apr 16 18:27:25.750629 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:25.750533 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce6c77c0-51fa-4864-87f4-072aae4ca35e-proxy-tls\") pod \"ce6c77c0-51fa-4864-87f4-072aae4ca35e\" (UID: \"ce6c77c0-51fa-4864-87f4-072aae4ca35e\") " Apr 16 18:27:25.750629 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:25.750618 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce6c77c0-51fa-4864-87f4-072aae4ca35e-openshift-service-ca-bundle\") pod \"ce6c77c0-51fa-4864-87f4-072aae4ca35e\" (UID: \"ce6c77c0-51fa-4864-87f4-072aae4ca35e\") " Apr 16 18:27:25.750957 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:25.750933 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce6c77c0-51fa-4864-87f4-072aae4ca35e-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "ce6c77c0-51fa-4864-87f4-072aae4ca35e" (UID: "ce6c77c0-51fa-4864-87f4-072aae4ca35e"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:27:25.752641 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:25.752612 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce6c77c0-51fa-4864-87f4-072aae4ca35e-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "ce6c77c0-51fa-4864-87f4-072aae4ca35e" (UID: "ce6c77c0-51fa-4864-87f4-072aae4ca35e"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:27:25.851200 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:25.851166 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce6c77c0-51fa-4864-87f4-072aae4ca35e-proxy-tls\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:27:25.851200 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:25.851196 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce6c77c0-51fa-4864-87f4-072aae4ca35e-openshift-service-ca-bundle\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:27:25.931380 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:25.931346 2571 generic.go:358] "Generic (PLEG): container finished" podID="ce6c77c0-51fa-4864-87f4-072aae4ca35e" containerID="741889ea253f37eebd48b761a0440969dad26892ed9a85fdcd5d7856f20c1533" exitCode=0 Apr 16 18:27:25.931512 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:25.931409 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" Apr 16 18:27:25.931512 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:25.931427 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" event={"ID":"ce6c77c0-51fa-4864-87f4-072aae4ca35e","Type":"ContainerDied","Data":"741889ea253f37eebd48b761a0440969dad26892ed9a85fdcd5d7856f20c1533"} Apr 16 18:27:25.931512 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:25.931463 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5" event={"ID":"ce6c77c0-51fa-4864-87f4-072aae4ca35e","Type":"ContainerDied","Data":"7a35b2688410779ae1a9af4991152d88141b485c9cdecda699c35f337c5c04d9"} Apr 16 18:27:25.931512 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:25.931478 2571 scope.go:117] "RemoveContainer" containerID="741889ea253f37eebd48b761a0440969dad26892ed9a85fdcd5d7856f20c1533" Apr 16 18:27:25.942468 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:25.942447 2571 scope.go:117] "RemoveContainer" containerID="741889ea253f37eebd48b761a0440969dad26892ed9a85fdcd5d7856f20c1533" Apr 16 18:27:25.942725 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:27:25.942703 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"741889ea253f37eebd48b761a0440969dad26892ed9a85fdcd5d7856f20c1533\": container with ID starting with 741889ea253f37eebd48b761a0440969dad26892ed9a85fdcd5d7856f20c1533 not found: ID does not exist" containerID="741889ea253f37eebd48b761a0440969dad26892ed9a85fdcd5d7856f20c1533" Apr 16 18:27:25.942817 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:25.942729 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"741889ea253f37eebd48b761a0440969dad26892ed9a85fdcd5d7856f20c1533"} err="failed to get container status \"741889ea253f37eebd48b761a0440969dad26892ed9a85fdcd5d7856f20c1533\": rpc error: code = NotFound desc = could not find container \"741889ea253f37eebd48b761a0440969dad26892ed9a85fdcd5d7856f20c1533\": container with ID starting with 741889ea253f37eebd48b761a0440969dad26892ed9a85fdcd5d7856f20c1533 not found: ID does not exist" Apr 16 18:27:25.953994 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:25.953974 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5"] Apr 16 18:27:25.957887 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:25.957867 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-0403e-6dd9d7fbd6-55qr5"] Apr 16 18:27:26.715289 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:26.715255 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce6c77c0-51fa-4864-87f4-072aae4ca35e" path="/var/lib/kubelet/pods/ce6c77c0-51fa-4864-87f4-072aae4ca35e/volumes" Apr 16 18:27:26.846204 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:26.846164 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq" podUID="e07116fe-c239-43f6-beac-013fd89bbf35" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.29:8080: connect: connection refused" Apr 16 18:27:35.782115 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.782034 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg"] Apr 16 18:27:35.782452 ip-10-0-128-59 
Apr 16 18:27:35.782452 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.782369 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3993f680-7849-4f19-9b81-85dd3c228e72" containerName="kserve-container"
Apr 16 18:27:35.782452 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.782382 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="3993f680-7849-4f19-9b81-85dd3c228e72" containerName="kserve-container"
Apr 16 18:27:35.782452 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.782395 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e559e62e-8fb8-4de9-9ecf-54ca36d53802" containerName="switch-graph-04c3d"
Apr 16 18:27:35.782452 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.782401 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="e559e62e-8fb8-4de9-9ecf-54ca36d53802" containerName="switch-graph-04c3d"
Apr 16 18:27:35.782452 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.782412 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ce6c77c0-51fa-4864-87f4-072aae4ca35e" containerName="sequence-graph-0403e"
Apr 16 18:27:35.782452 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.782417 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6c77c0-51fa-4864-87f4-072aae4ca35e" containerName="sequence-graph-0403e"
Apr 16 18:27:35.782654 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.782468 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="3993f680-7849-4f19-9b81-85dd3c228e72" containerName="kserve-container"
Apr 16 18:27:35.782654 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.782478 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="e559e62e-8fb8-4de9-9ecf-54ca36d53802" containerName="switch-graph-04c3d"
Apr 16 18:27:35.782654 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.782485 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="ce6c77c0-51fa-4864-87f4-072aae4ca35e" containerName="sequence-graph-0403e"
Apr 16 18:27:35.786400 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.786380 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg"
Apr 16 18:27:35.788423 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.788398 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-ccdbb-kube-rbac-proxy-sar-config\""
Apr 16 18:27:35.788560 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.788461 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-ccdbb-serving-cert\""
Apr 16 18:27:35.788560 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.788522 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\""
Apr 16 18:27:35.792412 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.792387 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg"]
Apr 16 18:27:35.931978 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.931938 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1406e5f-cf63-4a83-b38c-0513f18af20a-proxy-tls\") pod \"ensemble-graph-ccdbb-59bfb89b8-8swsg\" (UID: \"c1406e5f-cf63-4a83-b38c-0513f18af20a\") " pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg"
Apr 16 18:27:35.931978 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:35.931983 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1406e5f-cf63-4a83-b38c-0513f18af20a-openshift-service-ca-bundle\") pod \"ensemble-graph-ccdbb-59bfb89b8-8swsg\" (UID: \"c1406e5f-cf63-4a83-b38c-0513f18af20a\") " pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg"
Apr 16 18:27:36.033244 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:36.033158 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1406e5f-cf63-4a83-b38c-0513f18af20a-openshift-service-ca-bundle\") pod \"ensemble-graph-ccdbb-59bfb89b8-8swsg\" (UID: \"c1406e5f-cf63-4a83-b38c-0513f18af20a\") " pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg"
Apr 16 18:27:36.033382 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:36.033264 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1406e5f-cf63-4a83-b38c-0513f18af20a-proxy-tls\") pod \"ensemble-graph-ccdbb-59bfb89b8-8swsg\" (UID: \"c1406e5f-cf63-4a83-b38c-0513f18af20a\") " pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg"
Apr 16 18:27:36.033382 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:27:36.033372 2571 secret.go:189] Couldn't get secret kserve-ci-e2e-test/ensemble-graph-ccdbb-serving-cert: secret "ensemble-graph-ccdbb-serving-cert" not found
Apr 16 18:27:36.033464 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:27:36.033451 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1406e5f-cf63-4a83-b38c-0513f18af20a-proxy-tls podName:c1406e5f-cf63-4a83-b38c-0513f18af20a nodeName:}" failed. No retries permitted until 2026-04-16 18:27:36.533427935 +0000 UTC m=+1514.362391099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c1406e5f-cf63-4a83-b38c-0513f18af20a-proxy-tls") pod "ensemble-graph-ccdbb-59bfb89b8-8swsg" (UID: "c1406e5f-cf63-4a83-b38c-0513f18af20a") : secret "ensemble-graph-ccdbb-serving-cert" not found
Apr 16 18:27:36.033950 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:36.033931 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1406e5f-cf63-4a83-b38c-0513f18af20a-openshift-service-ca-bundle\") pod \"ensemble-graph-ccdbb-59bfb89b8-8swsg\" (UID: \"c1406e5f-cf63-4a83-b38c-0513f18af20a\") " pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg"
Apr 16 18:27:36.537564 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:36.537525 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1406e5f-cf63-4a83-b38c-0513f18af20a-proxy-tls\") pod \"ensemble-graph-ccdbb-59bfb89b8-8swsg\" (UID: \"c1406e5f-cf63-4a83-b38c-0513f18af20a\") " pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg"
Apr 16 18:27:36.539957 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:36.539938 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1406e5f-cf63-4a83-b38c-0513f18af20a-proxy-tls\") pod \"ensemble-graph-ccdbb-59bfb89b8-8swsg\" (UID: \"c1406e5f-cf63-4a83-b38c-0513f18af20a\") " pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg"
Apr 16 18:27:36.697952 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:36.697906 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg"
Apr 16 18:27:36.816703 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:36.816679 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg"]
Apr 16 18:27:36.817772 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:27:36.817739 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1406e5f_cf63_4a83_b38c_0513f18af20a.slice/crio-5dff887b68dcc01afcc002e6947bac5f8d3af4a1f1c89c05b31636b0130c494c WatchSource:0}: Error finding container 5dff887b68dcc01afcc002e6947bac5f8d3af4a1f1c89c05b31636b0130c494c: Status 404 returned error can't find the container with id 5dff887b68dcc01afcc002e6947bac5f8d3af4a1f1c89c05b31636b0130c494c
Apr 16 18:27:36.845904 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:36.845876 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq" podUID="e07116fe-c239-43f6-beac-013fd89bbf35" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.29:8080: connect: connection refused"
Apr 16 18:27:36.966004 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:36.965971 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" event={"ID":"c1406e5f-cf63-4a83-b38c-0513f18af20a","Type":"ContainerStarted","Data":"426cb14f8c1441210261dd65098ea93e931da6ddce76b40591650ca605ef710d"}
Apr 16 18:27:36.966004 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:36.966008 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" event={"ID":"c1406e5f-cf63-4a83-b38c-0513f18af20a","Type":"ContainerStarted","Data":"5dff887b68dcc01afcc002e6947bac5f8d3af4a1f1c89c05b31636b0130c494c"}
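Above, mounting proxy-tls first fails because the serving-cert secret does not exist yet; the kubelet schedules a retry 500ms out ("No retries permitted until ... durationBeforeRetry 500ms") and the next attempt succeeds once the secret appears. A generic sketch of that retry shape, assuming a simple doubling backoff (the kubelet's exact policy may differ; the operation and figures are stand-ins):

```go
// Sketch: retry an operation with a doubling delay until it succeeds
// or attempts run out.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retryWithBackoff(op func() error, initial time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		time.Sleep(delay) // "No retries permitted until" now+delay
		delay *= 2        // back off so a missing dependency is not hammered
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 3 {
			return errors.New(`secret "serving-cert" not found`)
		}
		return nil // the dependency finally exists
	}, 500*time.Millisecond, 5)
	fmt.Println(calls, err) // 3 <nil>
}
```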
event={"ID":"c1406e5f-cf63-4a83-b38c-0513f18af20a","Type":"ContainerStarted","Data":"5dff887b68dcc01afcc002e6947bac5f8d3af4a1f1c89c05b31636b0130c494c"} Apr 16 18:27:36.966217 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:36.966094 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" Apr 16 18:27:36.981695 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:36.981647 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" podStartSLOduration=1.981632858 podStartE2EDuration="1.981632858s" podCreationTimestamp="2026-04-16 18:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:27:36.980528134 +0000 UTC m=+1514.809491266" watchObservedRunningTime="2026-04-16 18:27:36.981632858 +0000 UTC m=+1514.810595993" Apr 16 18:27:42.974369 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:42.974342 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" Apr 16 18:27:45.843972 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:45.843938 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg"] Apr 16 18:27:45.844314 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:45.844136 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" podUID="c1406e5f-cf63-4a83-b38c-0513f18af20a" containerName="ensemble-graph-ccdbb" containerID="cri-o://426cb14f8c1441210261dd65098ea93e931da6ddce76b40591650ca605ef710d" gracePeriod=30 Apr 16 18:27:45.998316 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:45.998287 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24"] Apr 16 18:27:45.998545 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:45.998523 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" podUID="a50cbc4a-a789-49ff-827a-96113af18a02" containerName="kserve-container" containerID="cri-o://4e75f50ca902eefc8667e0a4b1ed172b084145fb3523736db7e8ebebc8f86aaa" gracePeriod=30 Apr 16 18:27:46.015364 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:46.015332 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w"] Apr 16 18:27:46.024349 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:46.024322 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" Apr 16 18:27:46.034729 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:46.034700 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w"] Apr 16 18:27:46.035764 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:46.035746 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" Apr 16 18:27:46.169246 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:46.169221 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w"] Apr 16 18:27:46.171781 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:27:46.171753 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28dbc4e6_4622_4258_91ad_f18f9f5275ff.slice/crio-64c36e8cf093cba1eeb6e4055ed9ebe541f48d996c54d586c6dcbdd9398ff489 WatchSource:0}: Error finding container 64c36e8cf093cba1eeb6e4055ed9ebe541f48d996c54d586c6dcbdd9398ff489: Status 404 returned error can't find the container with id 64c36e8cf093cba1eeb6e4055ed9ebe541f48d996c54d586c6dcbdd9398ff489 Apr 16 18:27:46.847558 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:46.847531 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq" Apr 16 18:27:46.997158 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:46.997122 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" event={"ID":"28dbc4e6-4622-4258-91ad-f18f9f5275ff","Type":"ContainerStarted","Data":"e2380fe4857e235a45d96ac4c45dcddf6875ab32b5a393fe6cc143d947902e1f"} Apr 16 18:27:46.997158 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:46.997157 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" event={"ID":"28dbc4e6-4622-4258-91ad-f18f9f5275ff","Type":"ContainerStarted","Data":"64c36e8cf093cba1eeb6e4055ed9ebe541f48d996c54d586c6dcbdd9398ff489"} Apr 16 18:27:46.997394 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:46.997277 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" Apr 16 18:27:46.998665 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:46.998638 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" podUID="28dbc4e6-4622-4258-91ad-f18f9f5275ff" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.31:8080: connect: connection refused" Apr 16 18:27:47.014581 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:47.014538 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" podStartSLOduration=2.014524645 podStartE2EDuration="2.014524645s" podCreationTimestamp="2026-04-16 18:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:27:47.013177015 +0000 UTC m=+1524.842140149" watchObservedRunningTime="2026-04-16 18:27:47.014524645 +0000 UTC m=+1524.843487778" Apr 16 18:27:47.757423 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:47.757379 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" podUID="a50cbc4a-a789-49ff-827a-96113af18a02" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.28:8080: connect: connection refused" Apr 16 18:27:47.974657 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:47.974619 2571 prober.go:120] "Probe failed" 
probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" podUID="c1406e5f-cf63-4a83-b38c-0513f18af20a" containerName="ensemble-graph-ccdbb" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:27:47.999789 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:47.999752 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" podUID="28dbc4e6-4622-4258-91ad-f18f9f5275ff" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.31:8080: connect: connection refused" Apr 16 18:27:49.155339 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:49.155315 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" Apr 16 18:27:50.006542 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:50.006508 2571 generic.go:358] "Generic (PLEG): container finished" podID="a50cbc4a-a789-49ff-827a-96113af18a02" containerID="4e75f50ca902eefc8667e0a4b1ed172b084145fb3523736db7e8ebebc8f86aaa" exitCode=0 Apr 16 18:27:50.006730 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:50.006574 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" Apr 16 18:27:50.006730 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:50.006566 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" event={"ID":"a50cbc4a-a789-49ff-827a-96113af18a02","Type":"ContainerDied","Data":"4e75f50ca902eefc8667e0a4b1ed172b084145fb3523736db7e8ebebc8f86aaa"} Apr 16 18:27:50.006730 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:50.006700 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24" event={"ID":"a50cbc4a-a789-49ff-827a-96113af18a02","Type":"ContainerDied","Data":"2f94143ec3403618d803a7bc852e0a53ba09b749df9a290218852610cc4ab364"} Apr 16 18:27:50.006730 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:50.006721 2571 scope.go:117] "RemoveContainer" containerID="4e75f50ca902eefc8667e0a4b1ed172b084145fb3523736db7e8ebebc8f86aaa" Apr 16 18:27:50.015149 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:50.015132 2571 scope.go:117] "RemoveContainer" containerID="4e75f50ca902eefc8667e0a4b1ed172b084145fb3523736db7e8ebebc8f86aaa" Apr 16 18:27:50.015409 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:27:50.015378 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e75f50ca902eefc8667e0a4b1ed172b084145fb3523736db7e8ebebc8f86aaa\": container with ID starting with 4e75f50ca902eefc8667e0a4b1ed172b084145fb3523736db7e8ebebc8f86aaa not found: ID does not exist" containerID="4e75f50ca902eefc8667e0a4b1ed172b084145fb3523736db7e8ebebc8f86aaa" Apr 16 18:27:50.015500 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:50.015405 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e75f50ca902eefc8667e0a4b1ed172b084145fb3523736db7e8ebebc8f86aaa"} err="failed to get container status \"4e75f50ca902eefc8667e0a4b1ed172b084145fb3523736db7e8ebebc8f86aaa\": rpc error: code = NotFound desc = could not find container \"4e75f50ca902eefc8667e0a4b1ed172b084145fb3523736db7e8ebebc8f86aaa\": container with ID starting with 4e75f50ca902eefc8667e0a4b1ed172b084145fb3523736db7e8ebebc8f86aaa not found: 
ID does not exist" Apr 16 18:27:50.025226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:50.025203 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24"] Apr 16 18:27:50.029452 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:50.029431 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ccdbb-predictor-697459d95f-7kx24"] Apr 16 18:27:50.715963 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:50.715932 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a50cbc4a-a789-49ff-827a-96113af18a02" path="/var/lib/kubelet/pods/a50cbc4a-a789-49ff-827a-96113af18a02/volumes" Apr 16 18:27:52.972119 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:52.972083 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" podUID="c1406e5f-cf63-4a83-b38c-0513f18af20a" containerName="ensemble-graph-ccdbb" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:27:57.972910 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:57.972871 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" podUID="c1406e5f-cf63-4a83-b38c-0513f18af20a" containerName="ensemble-graph-ccdbb" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:27:57.973416 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:57.972970 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" Apr 16 18:27:57.999901 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:27:57.999863 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" podUID="28dbc4e6-4622-4258-91ad-f18f9f5275ff" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.31:8080: connect: connection refused" Apr 16 18:28:02.972914 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:02.972863 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" podUID="c1406e5f-cf63-4a83-b38c-0513f18af20a" containerName="ensemble-graph-ccdbb" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:28:05.591501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:05.591470 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d"] Apr 16 18:28:05.591951 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:05.591854 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a50cbc4a-a789-49ff-827a-96113af18a02" containerName="kserve-container" Apr 16 18:28:05.591951 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:05.591872 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50cbc4a-a789-49ff-827a-96113af18a02" containerName="kserve-container" Apr 16 18:28:05.592064 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:05.591986 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="a50cbc4a-a789-49ff-827a-96113af18a02" containerName="kserve-container" Apr 16 18:28:05.596546 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:05.596529 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" Apr 16 18:28:05.599032 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:05.599015 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-516af-kube-rbac-proxy-sar-config\"" Apr 16 18:28:05.599737 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:05.599720 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-516af-serving-cert\"" Apr 16 18:28:05.603877 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:05.603858 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d"] Apr 16 18:28:05.677465 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:05.677434 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af-proxy-tls\") pod \"sequence-graph-516af-7cdf98dc78-cwt5d\" (UID: \"c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af\") " pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" Apr 16 18:28:05.677642 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:05.677515 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af-openshift-service-ca-bundle\") pod \"sequence-graph-516af-7cdf98dc78-cwt5d\" (UID: \"c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af\") " pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" Apr 16 18:28:05.778212 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:05.778168 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af-proxy-tls\") pod \"sequence-graph-516af-7cdf98dc78-cwt5d\" (UID: \"c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af\") " pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" Apr 16 18:28:05.778350 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:05.778263 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af-openshift-service-ca-bundle\") pod \"sequence-graph-516af-7cdf98dc78-cwt5d\" (UID: \"c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af\") " pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" Apr 16 18:28:05.779057 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:05.779034 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af-openshift-service-ca-bundle\") pod \"sequence-graph-516af-7cdf98dc78-cwt5d\" (UID: \"c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af\") " pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" Apr 16 18:28:05.780458 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:05.780436 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af-proxy-tls\") pod \"sequence-graph-516af-7cdf98dc78-cwt5d\" (UID: \"c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af\") " pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" Apr 16 18:28:05.907882 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:05.907795 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" Apr 16 18:28:06.027558 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:06.027530 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d"] Apr 16 18:28:06.030049 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:28:06.030024 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7e9eee7_daf5_4fa1_86a1_469d3d0ae7af.slice/crio-81037ca9a9c62789b0297471f16a3bf2cfc8180c30105673beb736f7ba5eb2de WatchSource:0}: Error finding container 81037ca9a9c62789b0297471f16a3bf2cfc8180c30105673beb736f7ba5eb2de: Status 404 returned error can't find the container with id 81037ca9a9c62789b0297471f16a3bf2cfc8180c30105673beb736f7ba5eb2de Apr 16 18:28:06.054638 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:06.054610 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" event={"ID":"c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af","Type":"ContainerStarted","Data":"81037ca9a9c62789b0297471f16a3bf2cfc8180c30105673beb736f7ba5eb2de"} Apr 16 18:28:07.058900 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:07.058866 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" event={"ID":"c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af","Type":"ContainerStarted","Data":"56f86da6801a981111bafed70944a9c383db1de8aabb895897e591cf7f91d5c2"} Apr 16 18:28:07.059276 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:07.058973 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" Apr 16 18:28:07.078106 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:07.078052 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" podStartSLOduration=2.078037521 podStartE2EDuration="2.078037521s" podCreationTimestamp="2026-04-16 18:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:28:07.077955894 +0000 UTC m=+1544.906919027" watchObservedRunningTime="2026-04-16 18:28:07.078037521 +0000 UTC m=+1544.907000656" Apr 16 18:28:07.972580 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:07.972542 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" podUID="c1406e5f-cf63-4a83-b38c-0513f18af20a" containerName="ensemble-graph-ccdbb" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:28:08.000729 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:08.000700 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" podUID="28dbc4e6-4622-4258-91ad-f18f9f5275ff" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.31:8080: connect: connection refused" Apr 16 18:28:12.972930 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:12.972894 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" podUID="c1406e5f-cf63-4a83-b38c-0513f18af20a" containerName="ensemble-graph-ccdbb" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:28:13.068200 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:13.068172 2571 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" Apr 16 18:28:15.676214 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:15.676183 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d"] Apr 16 18:28:15.676624 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:15.676373 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" podUID="c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af" containerName="sequence-graph-516af" containerID="cri-o://56f86da6801a981111bafed70944a9c383db1de8aabb895897e591cf7f91d5c2" gracePeriod=30 Apr 16 18:28:15.806459 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:15.806394 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq"] Apr 16 18:28:15.806723 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:15.806692 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq" podUID="e07116fe-c239-43f6-beac-013fd89bbf35" containerName="kserve-container" containerID="cri-o://34c4a18aa99b7679852987cffd26d42bfcfb9d5e5e6e64333e600b131cc9ff0f" gracePeriod=30 Apr 16 18:28:15.830743 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:15.830713 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96"] Apr 16 18:28:15.834400 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:15.834379 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96" Apr 16 18:28:15.843729 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:15.843708 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96"] Apr 16 18:28:15.844997 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:15.844977 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96" Apr 16 18:28:16.012492 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.012445 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" Apr 16 18:28:16.053894 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.053865 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1406e5f-cf63-4a83-b38c-0513f18af20a-openshift-service-ca-bundle\") pod \"c1406e5f-cf63-4a83-b38c-0513f18af20a\" (UID: \"c1406e5f-cf63-4a83-b38c-0513f18af20a\") " Apr 16 18:28:16.054051 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.053953 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1406e5f-cf63-4a83-b38c-0513f18af20a-proxy-tls\") pod \"c1406e5f-cf63-4a83-b38c-0513f18af20a\" (UID: \"c1406e5f-cf63-4a83-b38c-0513f18af20a\") " Apr 16 18:28:16.054316 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.054282 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1406e5f-cf63-4a83-b38c-0513f18af20a-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "c1406e5f-cf63-4a83-b38c-0513f18af20a" (UID: "c1406e5f-cf63-4a83-b38c-0513f18af20a"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:28:16.056194 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.056169 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1406e5f-cf63-4a83-b38c-0513f18af20a-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "c1406e5f-cf63-4a83-b38c-0513f18af20a" (UID: "c1406e5f-cf63-4a83-b38c-0513f18af20a"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:28:16.086084 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.086052 2571 generic.go:358] "Generic (PLEG): container finished" podID="c1406e5f-cf63-4a83-b38c-0513f18af20a" containerID="426cb14f8c1441210261dd65098ea93e931da6ddce76b40591650ca605ef710d" exitCode=137 Apr 16 18:28:16.086242 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.086106 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" event={"ID":"c1406e5f-cf63-4a83-b38c-0513f18af20a","Type":"ContainerDied","Data":"426cb14f8c1441210261dd65098ea93e931da6ddce76b40591650ca605ef710d"} Apr 16 18:28:16.086242 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.086126 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" Apr 16 18:28:16.086242 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.086141 2571 scope.go:117] "RemoveContainer" containerID="426cb14f8c1441210261dd65098ea93e931da6ddce76b40591650ca605ef710d" Apr 16 18:28:16.086242 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.086132 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg" event={"ID":"c1406e5f-cf63-4a83-b38c-0513f18af20a","Type":"ContainerDied","Data":"5dff887b68dcc01afcc002e6947bac5f8d3af4a1f1c89c05b31636b0130c494c"} Apr 16 18:28:16.093961 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.093936 2571 scope.go:117] "RemoveContainer" containerID="426cb14f8c1441210261dd65098ea93e931da6ddce76b40591650ca605ef710d" Apr 16 18:28:16.094254 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:28:16.094233 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"426cb14f8c1441210261dd65098ea93e931da6ddce76b40591650ca605ef710d\": container with ID starting with 426cb14f8c1441210261dd65098ea93e931da6ddce76b40591650ca605ef710d not found: ID does not exist" containerID="426cb14f8c1441210261dd65098ea93e931da6ddce76b40591650ca605ef710d" Apr 16 18:28:16.094336 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.094263 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"426cb14f8c1441210261dd65098ea93e931da6ddce76b40591650ca605ef710d"} err="failed to get container status \"426cb14f8c1441210261dd65098ea93e931da6ddce76b40591650ca605ef710d\": rpc error: code = NotFound desc = could not find container \"426cb14f8c1441210261dd65098ea93e931da6ddce76b40591650ca605ef710d\": container with ID starting with 426cb14f8c1441210261dd65098ea93e931da6ddce76b40591650ca605ef710d not found: ID does not exist" Apr 16 18:28:16.110520 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.110494 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg"] Apr 16 18:28:16.113794 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.113770 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ccdbb-59bfb89b8-8swsg"] Apr 16 18:28:16.154869 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.154844 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1406e5f-cf63-4a83-b38c-0513f18af20a-proxy-tls\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:28:16.154869 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.154866 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1406e5f-cf63-4a83-b38c-0513f18af20a-openshift-service-ca-bundle\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:28:16.206233 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.206206 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96"] Apr 16 18:28:16.208618 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:28:16.208579 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8537ca0_09da_4ad3_b1da_78551585c1ee.slice/crio-cd5e5732a0b729611784b0a7fc8eea6a209d90eddb554b54f18bafedc1b16fb4 WatchSource:0}: Error finding container 
Apr 16 18:28:16.716906 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.716874 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1406e5f-cf63-4a83-b38c-0513f18af20a" path="/var/lib/kubelet/pods/c1406e5f-cf63-4a83-b38c-0513f18af20a/volumes"
Apr 16 18:28:16.846135 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:16.846096 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq" podUID="e07116fe-c239-43f6-beac-013fd89bbf35" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.29:8080: connect: connection refused"
Apr 16 18:28:17.090375 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:17.090341 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96" event={"ID":"c8537ca0-09da-4ad3-b1da-78551585c1ee","Type":"ContainerStarted","Data":"d2cb424601be972e631c18825dbef57ad61a1532830b4c17909b993d91a195aa"}
Apr 16 18:28:17.090375 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:17.090377 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96" event={"ID":"c8537ca0-09da-4ad3-b1da-78551585c1ee","Type":"ContainerStarted","Data":"cd5e5732a0b729611784b0a7fc8eea6a209d90eddb554b54f18bafedc1b16fb4"}
Apr 16 18:28:17.090624 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:17.090573 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96"
Apr 16 18:28:17.091848 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:17.091820 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96" podUID="c8537ca0-09da-4ad3-b1da-78551585c1ee" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.33:8080: connect: connection refused"
Apr 16 18:28:17.112806 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:17.112751 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96" podStartSLOduration=2.112734692 podStartE2EDuration="2.112734692s" podCreationTimestamp="2026-04-16 18:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:28:17.112649061 +0000 UTC m=+1554.941612194" watchObservedRunningTime="2026-04-16 18:28:17.112734692 +0000 UTC m=+1554.941697830"
Apr 16 18:28:18.000297 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:18.000253 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" podUID="28dbc4e6-4622-4258-91ad-f18f9f5275ff" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.31:8080: connect: connection refused"
Apr 16 18:28:18.066931 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:18.066894 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" podUID="c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af" containerName="sequence-graph-516af" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 18:28:18.094867 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:18.094832 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96" podUID="c8537ca0-09da-4ad3-b1da-78551585c1ee" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.33:8080: connect: connection refused"
Apr 16 18:28:18.947597 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:18.947564 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq"
Apr 16 18:28:19.097738 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:19.097701 2571 generic.go:358] "Generic (PLEG): container finished" podID="e07116fe-c239-43f6-beac-013fd89bbf35" containerID="34c4a18aa99b7679852987cffd26d42bfcfb9d5e5e6e64333e600b131cc9ff0f" exitCode=0
Apr 16 18:28:19.098092 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:19.097765 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq"
Apr 16 18:28:19.098092 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:19.097763 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq" event={"ID":"e07116fe-c239-43f6-beac-013fd89bbf35","Type":"ContainerDied","Data":"34c4a18aa99b7679852987cffd26d42bfcfb9d5e5e6e64333e600b131cc9ff0f"}
Apr 16 18:28:19.098092 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:19.097869 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq" event={"ID":"e07116fe-c239-43f6-beac-013fd89bbf35","Type":"ContainerDied","Data":"5fde49d36fa1dc46619932b062e60d1d0258dcdf203ffe22576f197f5b955579"}
Apr 16 18:28:19.098092 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:19.097890 2571 scope.go:117] "RemoveContainer" containerID="34c4a18aa99b7679852987cffd26d42bfcfb9d5e5e6e64333e600b131cc9ff0f"
Apr 16 18:28:19.105687 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:19.105612 2571 scope.go:117] "RemoveContainer" containerID="34c4a18aa99b7679852987cffd26d42bfcfb9d5e5e6e64333e600b131cc9ff0f"
Apr 16 18:28:19.105997 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:28:19.105974 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34c4a18aa99b7679852987cffd26d42bfcfb9d5e5e6e64333e600b131cc9ff0f\": container with ID starting with 34c4a18aa99b7679852987cffd26d42bfcfb9d5e5e6e64333e600b131cc9ff0f not found: ID does not exist" containerID="34c4a18aa99b7679852987cffd26d42bfcfb9d5e5e6e64333e600b131cc9ff0f"
Apr 16 18:28:19.106087 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:19.106011 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34c4a18aa99b7679852987cffd26d42bfcfb9d5e5e6e64333e600b131cc9ff0f"} err="failed to get container status \"34c4a18aa99b7679852987cffd26d42bfcfb9d5e5e6e64333e600b131cc9ff0f\": rpc error: code = NotFound desc = could not find container \"34c4a18aa99b7679852987cffd26d42bfcfb9d5e5e6e64333e600b131cc9ff0f\": container with ID starting with 34c4a18aa99b7679852987cffd26d42bfcfb9d5e5e6e64333e600b131cc9ff0f not found: ID does not exist"
Apr 16 18:28:19.121094 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:19.121066 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq"]
Apr 16 18:28:19.124206 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:19.124183 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-516af-predictor-7dc5688b5-z5fmq"]
Apr 16 18:28:20.714694 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:20.714653 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e07116fe-c239-43f6-beac-013fd89bbf35" path="/var/lib/kubelet/pods/e07116fe-c239-43f6-beac-013fd89bbf35/volumes"
Apr 16 18:28:23.066933 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:23.066851 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" podUID="c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af" containerName="sequence-graph-516af" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 18:28:28.000842 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:28.000805 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" podUID="28dbc4e6-4622-4258-91ad-f18f9f5275ff" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.31:8080: connect: connection refused"
Apr 16 18:28:28.066772 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:28.066733 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" podUID="c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af" containerName="sequence-graph-516af" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 18:28:28.066926 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:28.066830 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d"
Apr 16 18:28:28.094763 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:28.094728 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96" podUID="c8537ca0-09da-4ad3-b1da-78551585c1ee" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.33:8080: connect: connection refused"
Apr 16 18:28:33.066260 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:33.066214 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" podUID="c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af" containerName="sequence-graph-516af" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 18:28:38.001751 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:38.001724 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w"
Apr 16 18:28:38.065912 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:38.065879 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" podUID="c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af" containerName="sequence-graph-516af" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 18:28:38.094906 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:38.094871 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96" podUID="c8537ca0-09da-4ad3-b1da-78551585c1ee" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.33:8080: connect: connection refused"
Apr 16 18:28:43.066534 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:43.066488 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" podUID="c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af" containerName="sequence-graph-516af" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 18:28:45.699280 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:28:45.699256 2571 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7e9eee7_daf5_4fa1_86a1_469d3d0ae7af.slice/crio-conmon-56f86da6801a981111bafed70944a9c383db1de8aabb895897e591cf7f91d5c2.scope\": RecentStats: unable to find data in memory cache]"
Apr 16 18:28:45.699668 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:28:45.699317 2571 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7e9eee7_daf5_4fa1_86a1_469d3d0ae7af.slice/crio-conmon-56f86da6801a981111bafed70944a9c383db1de8aabb895897e591cf7f91d5c2.scope\": RecentStats: unable to find data in memory cache]"
Apr 16 18:28:45.699668 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:28:45.699365 2571 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7e9eee7_daf5_4fa1_86a1_469d3d0ae7af.slice/crio-81037ca9a9c62789b0297471f16a3bf2cfc8180c30105673beb736f7ba5eb2de\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7e9eee7_daf5_4fa1_86a1_469d3d0ae7af.slice/crio-conmon-56f86da6801a981111bafed70944a9c383db1de8aabb895897e591cf7f91d5c2.scope\": RecentStats: unable to find data in memory cache]"
Apr 16 18:28:45.832923 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:45.832900 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d"
Apr 16 18:28:46.004801 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:46.004707 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af-proxy-tls\") pod \"c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af\" (UID: \"c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af\") "
Apr 16 18:28:46.004801 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:46.004793 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af-openshift-service-ca-bundle\") pod \"c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af\" (UID: \"c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af\") "
Apr 16 18:28:46.005161 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:46.005137 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af" (UID: "c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 16 18:28:46.006825 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:46.006799 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af" (UID: "c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 16 18:28:46.105723 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:46.105688 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af-proxy-tls\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\""
Apr 16 18:28:46.105723 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:46.105718 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af-openshift-service-ca-bundle\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\""
Apr 16 18:28:46.182154 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:46.182118 2571 generic.go:358] "Generic (PLEG): container finished" podID="c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af" containerID="56f86da6801a981111bafed70944a9c383db1de8aabb895897e591cf7f91d5c2" exitCode=0
Apr 16 18:28:46.182290 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:46.182182 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d"
Apr 16 18:28:46.182290 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:46.182183 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" event={"ID":"c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af","Type":"ContainerDied","Data":"56f86da6801a981111bafed70944a9c383db1de8aabb895897e591cf7f91d5c2"}
Apr 16 18:28:46.182290 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:46.182220 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d" event={"ID":"c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af","Type":"ContainerDied","Data":"81037ca9a9c62789b0297471f16a3bf2cfc8180c30105673beb736f7ba5eb2de"}
Apr 16 18:28:46.182290 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:46.182236 2571 scope.go:117] "RemoveContainer" containerID="56f86da6801a981111bafed70944a9c383db1de8aabb895897e591cf7f91d5c2"
Apr 16 18:28:46.190381 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:46.190359 2571 scope.go:117] "RemoveContainer" containerID="56f86da6801a981111bafed70944a9c383db1de8aabb895897e591cf7f91d5c2"
Apr 16 18:28:46.190627 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:28:46.190607 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56f86da6801a981111bafed70944a9c383db1de8aabb895897e591cf7f91d5c2\": container with ID starting with 56f86da6801a981111bafed70944a9c383db1de8aabb895897e591cf7f91d5c2 not found: ID does not exist" containerID="56f86da6801a981111bafed70944a9c383db1de8aabb895897e591cf7f91d5c2"
Apr 16 18:28:46.190702 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:46.190634 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56f86da6801a981111bafed70944a9c383db1de8aabb895897e591cf7f91d5c2"} err="failed to get container status \"56f86da6801a981111bafed70944a9c383db1de8aabb895897e591cf7f91d5c2\": rpc error: code = NotFound desc = could not find container \"56f86da6801a981111bafed70944a9c383db1de8aabb895897e591cf7f91d5c2\": container with ID starting with 56f86da6801a981111bafed70944a9c383db1de8aabb895897e591cf7f91d5c2 not found: ID does not exist"
Apr 16 18:28:46.201903 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:46.201881 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d"]
Apr 16 18:28:46.205625 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:46.205604 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-516af-7cdf98dc78-cwt5d"]
Apr 16 18:28:46.717368 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:46.717128 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af" path="/var/lib/kubelet/pods/c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af/volumes"
Apr 16 18:28:48.094921 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:48.094878 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96" podUID="c8537ca0-09da-4ad3-b1da-78551585c1ee" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.33:8080: connect: connection refused"
Apr 16 18:28:56.108423 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.108381 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb"]
Apr 16 18:28:56.111009 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.108727 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c1406e5f-cf63-4a83-b38c-0513f18af20a" containerName="ensemble-graph-ccdbb"
Apr 16 18:28:56.111009 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.108739 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1406e5f-cf63-4a83-b38c-0513f18af20a" containerName="ensemble-graph-ccdbb"
Apr 16 18:28:56.111009 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.108764 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af" containerName="sequence-graph-516af"
Apr 16 18:28:56.111009 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.108771 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af" containerName="sequence-graph-516af"
Apr 16 18:28:56.111009 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.108781 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e07116fe-c239-43f6-beac-013fd89bbf35" containerName="kserve-container"
Apr 16 18:28:56.111009 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.108786 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="e07116fe-c239-43f6-beac-013fd89bbf35" containerName="kserve-container"
Apr 16 18:28:56.111009 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.108833 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="e07116fe-c239-43f6-beac-013fd89bbf35" containerName="kserve-container"
Apr 16 18:28:56.111009 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.108846 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="c7e9eee7-daf5-4fa1-86a1-469d3d0ae7af" containerName="sequence-graph-516af"
Apr 16 18:28:56.111009 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.108854 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="c1406e5f-cf63-4a83-b38c-0513f18af20a" containerName="ensemble-graph-ccdbb"
podUID="c1406e5f-cf63-4a83-b38c-0513f18af20a" containerName="ensemble-graph-ccdbb" Apr 16 18:28:56.111854 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.111838 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" Apr 16 18:28:56.113747 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.113723 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-df772-kube-rbac-proxy-sar-config\"" Apr 16 18:28:56.113910 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.113787 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 16 18:28:56.113910 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.113803 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-df772-serving-cert\"" Apr 16 18:28:56.118562 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.118534 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb"] Apr 16 18:28:56.288565 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.288527 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c062514c-adc1-4f8f-ad51-7a656f69c3eb-openshift-service-ca-bundle\") pod \"ensemble-graph-df772-7fb98745c-jlqxb\" (UID: \"c062514c-adc1-4f8f-ad51-7a656f69c3eb\") " pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" Apr 16 18:28:56.288565 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.288571 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c062514c-adc1-4f8f-ad51-7a656f69c3eb-proxy-tls\") pod \"ensemble-graph-df772-7fb98745c-jlqxb\" (UID: \"c062514c-adc1-4f8f-ad51-7a656f69c3eb\") " pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" Apr 16 18:28:56.389334 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.389240 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c062514c-adc1-4f8f-ad51-7a656f69c3eb-openshift-service-ca-bundle\") pod \"ensemble-graph-df772-7fb98745c-jlqxb\" (UID: \"c062514c-adc1-4f8f-ad51-7a656f69c3eb\") " pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" Apr 16 18:28:56.389334 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.389285 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c062514c-adc1-4f8f-ad51-7a656f69c3eb-proxy-tls\") pod \"ensemble-graph-df772-7fb98745c-jlqxb\" (UID: \"c062514c-adc1-4f8f-ad51-7a656f69c3eb\") " pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" Apr 16 18:28:56.389992 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.389961 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c062514c-adc1-4f8f-ad51-7a656f69c3eb-openshift-service-ca-bundle\") pod \"ensemble-graph-df772-7fb98745c-jlqxb\" (UID: \"c062514c-adc1-4f8f-ad51-7a656f69c3eb\") " pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" Apr 16 18:28:56.391634 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.391616 2571 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c062514c-adc1-4f8f-ad51-7a656f69c3eb-proxy-tls\") pod \"ensemble-graph-df772-7fb98745c-jlqxb\" (UID: \"c062514c-adc1-4f8f-ad51-7a656f69c3eb\") " pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" Apr 16 18:28:56.422518 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.422485 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" Apr 16 18:28:56.542304 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:56.542282 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb"] Apr 16 18:28:56.544687 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:28:56.544660 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc062514c_adc1_4f8f_ad51_7a656f69c3eb.slice/crio-c47ca6f3c4b91df17a352ea82cdb103a3d5cb7afb600923e160e932c25088825 WatchSource:0}: Error finding container c47ca6f3c4b91df17a352ea82cdb103a3d5cb7afb600923e160e932c25088825: Status 404 returned error can't find the container with id c47ca6f3c4b91df17a352ea82cdb103a3d5cb7afb600923e160e932c25088825 Apr 16 18:28:57.218776 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:57.218741 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" event={"ID":"c062514c-adc1-4f8f-ad51-7a656f69c3eb","Type":"ContainerStarted","Data":"528c42215458e636bd2bf1e6b5351d7cf9d84af6db3d2d02b212e2d6d38388af"} Apr 16 18:28:57.218776 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:57.218779 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" event={"ID":"c062514c-adc1-4f8f-ad51-7a656f69c3eb","Type":"ContainerStarted","Data":"c47ca6f3c4b91df17a352ea82cdb103a3d5cb7afb600923e160e932c25088825"} Apr 16 18:28:57.219199 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:57.218906 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" Apr 16 18:28:57.235148 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:57.235098 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" podStartSLOduration=1.235084619 podStartE2EDuration="1.235084619s" podCreationTimestamp="2026-04-16 18:28:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:28:57.233210221 +0000 UTC m=+1595.062173354" watchObservedRunningTime="2026-04-16 18:28:57.235084619 +0000 UTC m=+1595.064047752" Apr 16 18:28:58.095226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:28:58.095190 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96" podUID="c8537ca0-09da-4ad3-b1da-78551585c1ee" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.33:8080: connect: connection refused" Apr 16 18:29:03.227648 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:03.227553 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" Apr 16 18:29:08.096634 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:08.096602 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96" Apr 16 18:29:25.921501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:25.921467 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk"] Apr 16 18:29:25.925907 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:25.925891 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" Apr 16 18:29:25.928556 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:25.928515 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-ed24b-serving-cert\"" Apr 16 18:29:25.928753 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:25.928733 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-ed24b-kube-rbac-proxy-sar-config\"" Apr 16 18:29:25.946598 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:25.946566 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk"] Apr 16 18:29:26.000049 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:26.000015 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/45d446be-c5bb-4512-b301-7036b270391e-proxy-tls\") pod \"sequence-graph-ed24b-7c6b74f4df-gtjnk\" (UID: \"45d446be-c5bb-4512-b301-7036b270391e\") " pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" Apr 16 18:29:26.000205 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:26.000079 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45d446be-c5bb-4512-b301-7036b270391e-openshift-service-ca-bundle\") pod \"sequence-graph-ed24b-7c6b74f4df-gtjnk\" (UID: \"45d446be-c5bb-4512-b301-7036b270391e\") " pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" Apr 16 18:29:26.100721 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:26.100691 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45d446be-c5bb-4512-b301-7036b270391e-openshift-service-ca-bundle\") pod \"sequence-graph-ed24b-7c6b74f4df-gtjnk\" (UID: \"45d446be-c5bb-4512-b301-7036b270391e\") " pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" Apr 16 18:29:26.100866 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:26.100766 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/45d446be-c5bb-4512-b301-7036b270391e-proxy-tls\") pod \"sequence-graph-ed24b-7c6b74f4df-gtjnk\" (UID: \"45d446be-c5bb-4512-b301-7036b270391e\") " pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" Apr 16 18:29:26.101352 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:26.101330 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45d446be-c5bb-4512-b301-7036b270391e-openshift-service-ca-bundle\") pod \"sequence-graph-ed24b-7c6b74f4df-gtjnk\" (UID: \"45d446be-c5bb-4512-b301-7036b270391e\") " pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" Apr 16 18:29:26.103066 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:26.103044 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/45d446be-c5bb-4512-b301-7036b270391e-proxy-tls\") pod \"sequence-graph-ed24b-7c6b74f4df-gtjnk\" (UID: \"45d446be-c5bb-4512-b301-7036b270391e\") " pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" Apr 16 18:29:26.236660 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:26.236564 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" Apr 16 18:29:26.352622 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:26.352582 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk"] Apr 16 18:29:26.354663 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:29:26.354638 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45d446be_c5bb_4512_b301_7036b270391e.slice/crio-a6fe8d256e2613cbf38a1aedf6d1a0f1a2e4e8de8b0123d1f714cac3571fb67e WatchSource:0}: Error finding container a6fe8d256e2613cbf38a1aedf6d1a0f1a2e4e8de8b0123d1f714cac3571fb67e: Status 404 returned error can't find the container with id a6fe8d256e2613cbf38a1aedf6d1a0f1a2e4e8de8b0123d1f714cac3571fb67e Apr 16 18:29:27.306455 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:27.306422 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" event={"ID":"45d446be-c5bb-4512-b301-7036b270391e","Type":"ContainerStarted","Data":"e7efa95ca5586966aad354fc6272dc12729037fa20646e74f200c8149602bcf3"} Apr 16 18:29:27.306455 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:27.306458 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" event={"ID":"45d446be-c5bb-4512-b301-7036b270391e","Type":"ContainerStarted","Data":"a6fe8d256e2613cbf38a1aedf6d1a0f1a2e4e8de8b0123d1f714cac3571fb67e"} Apr 16 18:29:27.306896 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:27.306585 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" Apr 16 18:29:27.335630 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:27.335544 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" podStartSLOduration=2.335526171 podStartE2EDuration="2.335526171s" podCreationTimestamp="2026-04-16 18:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:29:27.334400909 +0000 UTC m=+1625.163364041" watchObservedRunningTime="2026-04-16 18:29:27.335526171 +0000 UTC m=+1625.164489306" Apr 16 18:29:33.315321 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:29:33.315296 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" Apr 16 18:37:10.768116 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:10.768078 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb"] Apr 16 18:37:10.770519 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:10.768391 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" podUID="c062514c-adc1-4f8f-ad51-7a656f69c3eb" containerName="ensemble-graph-df772" 
containerID="cri-o://528c42215458e636bd2bf1e6b5351d7cf9d84af6db3d2d02b212e2d6d38388af" gracePeriod=30 Apr 16 18:37:10.881517 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:10.881469 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w"] Apr 16 18:37:10.881821 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:10.881792 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" podUID="28dbc4e6-4622-4258-91ad-f18f9f5275ff" containerName="kserve-container" containerID="cri-o://e2380fe4857e235a45d96ac4c45dcddf6875ab32b5a393fe6cc143d947902e1f" gracePeriod=30 Apr 16 18:37:10.927043 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:10.927010 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6"] Apr 16 18:37:10.930196 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:10.930173 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" Apr 16 18:37:10.942297 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:10.942275 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" Apr 16 18:37:10.947903 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:10.947877 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6"] Apr 16 18:37:11.070971 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:11.070942 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6"] Apr 16 18:37:11.073931 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:37:11.073901 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996efd22_cdee_405a_8852_b2952981236a.slice/crio-b16177dea9716e0f15392c38c5bbb9dc551d12f2eb2a964c218d70adb6ddbd5e WatchSource:0}: Error finding container b16177dea9716e0f15392c38c5bbb9dc551d12f2eb2a964c218d70adb6ddbd5e: Status 404 returned error can't find the container with id b16177dea9716e0f15392c38c5bbb9dc551d12f2eb2a964c218d70adb6ddbd5e Apr 16 18:37:11.075740 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:11.075720 2571 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 18:37:11.610206 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:11.610162 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" event={"ID":"996efd22-cdee-405a-8852-b2952981236a","Type":"ContainerStarted","Data":"26661d3aba82e96434db643afe4ee554b467e36941732252125315b591109974"} Apr 16 18:37:11.610391 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:11.610212 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" event={"ID":"996efd22-cdee-405a-8852-b2952981236a","Type":"ContainerStarted","Data":"b16177dea9716e0f15392c38c5bbb9dc551d12f2eb2a964c218d70adb6ddbd5e"} Apr 16 18:37:11.610391 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:11.610353 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" Apr 16 18:37:11.611669 ip-10-0-128-59 
kubenswrapper[2571]: I0416 18:37:11.611642 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" podUID="996efd22-cdee-405a-8852-b2952981236a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.36:8080: connect: connection refused" Apr 16 18:37:11.625291 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:11.625254 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" podStartSLOduration=1.625242671 podStartE2EDuration="1.625242671s" podCreationTimestamp="2026-04-16 18:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:37:11.623452045 +0000 UTC m=+2089.452415177" watchObservedRunningTime="2026-04-16 18:37:11.625242671 +0000 UTC m=+2089.454205875" Apr 16 18:37:12.613604 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:12.613554 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" podUID="996efd22-cdee-405a-8852-b2952981236a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.36:8080: connect: connection refused" Apr 16 18:37:13.225459 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:13.225420 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" podUID="c062514c-adc1-4f8f-ad51-7a656f69c3eb" containerName="ensemble-graph-df772" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:37:14.026744 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:14.026716 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" Apr 16 18:37:14.621074 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:14.621041 2571 generic.go:358] "Generic (PLEG): container finished" podID="28dbc4e6-4622-4258-91ad-f18f9f5275ff" containerID="e2380fe4857e235a45d96ac4c45dcddf6875ab32b5a393fe6cc143d947902e1f" exitCode=0 Apr 16 18:37:14.621244 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:14.621101 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" Apr 16 18:37:14.621244 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:14.621127 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" event={"ID":"28dbc4e6-4622-4258-91ad-f18f9f5275ff","Type":"ContainerDied","Data":"e2380fe4857e235a45d96ac4c45dcddf6875ab32b5a393fe6cc143d947902e1f"} Apr 16 18:37:14.621244 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:14.621168 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w" event={"ID":"28dbc4e6-4622-4258-91ad-f18f9f5275ff","Type":"ContainerDied","Data":"64c36e8cf093cba1eeb6e4055ed9ebe541f48d996c54d586c6dcbdd9398ff489"} Apr 16 18:37:14.621244 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:14.621184 2571 scope.go:117] "RemoveContainer" containerID="e2380fe4857e235a45d96ac4c45dcddf6875ab32b5a393fe6cc143d947902e1f" Apr 16 18:37:14.629120 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:14.629100 2571 scope.go:117] "RemoveContainer" containerID="e2380fe4857e235a45d96ac4c45dcddf6875ab32b5a393fe6cc143d947902e1f" Apr 16 18:37:14.629358 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:37:14.629341 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2380fe4857e235a45d96ac4c45dcddf6875ab32b5a393fe6cc143d947902e1f\": container with ID starting with e2380fe4857e235a45d96ac4c45dcddf6875ab32b5a393fe6cc143d947902e1f not found: ID does not exist" containerID="e2380fe4857e235a45d96ac4c45dcddf6875ab32b5a393fe6cc143d947902e1f" Apr 16 18:37:14.629410 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:14.629365 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2380fe4857e235a45d96ac4c45dcddf6875ab32b5a393fe6cc143d947902e1f"} err="failed to get container status \"e2380fe4857e235a45d96ac4c45dcddf6875ab32b5a393fe6cc143d947902e1f\": rpc error: code = NotFound desc = could not find container \"e2380fe4857e235a45d96ac4c45dcddf6875ab32b5a393fe6cc143d947902e1f\": container with ID starting with e2380fe4857e235a45d96ac4c45dcddf6875ab32b5a393fe6cc143d947902e1f not found: ID does not exist" Apr 16 18:37:14.639733 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:14.639714 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w"] Apr 16 18:37:14.643652 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:14.643633 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-df772-predictor-875748d7f-jdp2w"] Apr 16 18:37:14.714815 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:14.714790 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28dbc4e6-4622-4258-91ad-f18f9f5275ff" path="/var/lib/kubelet/pods/28dbc4e6-4622-4258-91ad-f18f9f5275ff/volumes" Apr 16 18:37:18.225660 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:18.225622 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" podUID="c062514c-adc1-4f8f-ad51-7a656f69c3eb" containerName="ensemble-graph-df772" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:37:22.613978 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:22.613938 2571 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" podUID="996efd22-cdee-405a-8852-b2952981236a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.36:8080: connect: connection refused" Apr 16 18:37:23.224925 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:23.224884 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" podUID="c062514c-adc1-4f8f-ad51-7a656f69c3eb" containerName="ensemble-graph-df772" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:37:23.225095 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:23.225011 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" Apr 16 18:37:28.224988 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:28.224948 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" podUID="c062514c-adc1-4f8f-ad51-7a656f69c3eb" containerName="ensemble-graph-df772" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:37:32.614208 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:32.614170 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" podUID="996efd22-cdee-405a-8852-b2952981236a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.36:8080: connect: connection refused" Apr 16 18:37:33.226068 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:33.226028 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" podUID="c062514c-adc1-4f8f-ad51-7a656f69c3eb" containerName="ensemble-graph-df772" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:37:38.226078 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:38.226031 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" podUID="c062514c-adc1-4f8f-ad51-7a656f69c3eb" containerName="ensemble-graph-df772" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:37:40.608582 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:40.608551 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk"] Apr 16 18:37:40.609155 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:40.608992 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" podUID="45d446be-c5bb-4512-b301-7036b270391e" containerName="sequence-graph-ed24b" containerID="cri-o://e7efa95ca5586966aad354fc6272dc12729037fa20646e74f200c8149602bcf3" gracePeriod=30 Apr 16 18:37:40.752579 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:40.752538 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96"] Apr 16 18:37:40.752833 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:40.752808 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96" podUID="c8537ca0-09da-4ad3-b1da-78551585c1ee" containerName="kserve-container" containerID="cri-o://d2cb424601be972e631c18825dbef57ad61a1532830b4c17909b993d91a195aa" gracePeriod=30 Apr 16 18:37:40.772913 ip-10-0-128-59 kubenswrapper[2571]: I0416 
18:37:40.772883 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4"] Apr 16 18:37:40.773231 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:40.773218 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28dbc4e6-4622-4258-91ad-f18f9f5275ff" containerName="kserve-container" Apr 16 18:37:40.773324 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:40.773233 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="28dbc4e6-4622-4258-91ad-f18f9f5275ff" containerName="kserve-container" Apr 16 18:37:40.773324 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:40.773306 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="28dbc4e6-4622-4258-91ad-f18f9f5275ff" containerName="kserve-container" Apr 16 18:37:40.777901 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:40.777881 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" Apr 16 18:37:40.781872 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:40.781851 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4"] Apr 16 18:37:40.837973 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:40.837954 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" Apr 16 18:37:40.958981 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:40.958950 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4"] Apr 16 18:37:40.962146 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:37:40.962119 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28d7194e_4cc1_46d4_befc_1faef9286d01.slice/crio-c33a2eca72bfcdc7a2fe65ab3f50a69e79667ddb9301eda58385b19833c5b1e9 WatchSource:0}: Error finding container c33a2eca72bfcdc7a2fe65ab3f50a69e79667ddb9301eda58385b19833c5b1e9: Status 404 returned error can't find the container with id c33a2eca72bfcdc7a2fe65ab3f50a69e79667ddb9301eda58385b19833c5b1e9 Apr 16 18:37:41.398103 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.398080 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" Apr 16 18:37:41.525957 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.525920 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c062514c-adc1-4f8f-ad51-7a656f69c3eb-proxy-tls\") pod \"c062514c-adc1-4f8f-ad51-7a656f69c3eb\" (UID: \"c062514c-adc1-4f8f-ad51-7a656f69c3eb\") " Apr 16 18:37:41.526130 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.525983 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c062514c-adc1-4f8f-ad51-7a656f69c3eb-openshift-service-ca-bundle\") pod \"c062514c-adc1-4f8f-ad51-7a656f69c3eb\" (UID: \"c062514c-adc1-4f8f-ad51-7a656f69c3eb\") " Apr 16 18:37:41.526385 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.526359 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c062514c-adc1-4f8f-ad51-7a656f69c3eb-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "c062514c-adc1-4f8f-ad51-7a656f69c3eb" (UID: "c062514c-adc1-4f8f-ad51-7a656f69c3eb"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:37:41.527952 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.527931 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c062514c-adc1-4f8f-ad51-7a656f69c3eb-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "c062514c-adc1-4f8f-ad51-7a656f69c3eb" (UID: "c062514c-adc1-4f8f-ad51-7a656f69c3eb"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:37:41.626768 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.626737 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c062514c-adc1-4f8f-ad51-7a656f69c3eb-openshift-service-ca-bundle\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:37:41.626768 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.626764 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c062514c-adc1-4f8f-ad51-7a656f69c3eb-proxy-tls\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:37:41.699237 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.699201 2571 generic.go:358] "Generic (PLEG): container finished" podID="c062514c-adc1-4f8f-ad51-7a656f69c3eb" containerID="528c42215458e636bd2bf1e6b5351d7cf9d84af6db3d2d02b212e2d6d38388af" exitCode=0 Apr 16 18:37:41.699388 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.699275 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" Apr 16 18:37:41.699388 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.699290 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" event={"ID":"c062514c-adc1-4f8f-ad51-7a656f69c3eb","Type":"ContainerDied","Data":"528c42215458e636bd2bf1e6b5351d7cf9d84af6db3d2d02b212e2d6d38388af"} Apr 16 18:37:41.699388 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.699324 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb" event={"ID":"c062514c-adc1-4f8f-ad51-7a656f69c3eb","Type":"ContainerDied","Data":"c47ca6f3c4b91df17a352ea82cdb103a3d5cb7afb600923e160e932c25088825"} Apr 16 18:37:41.699388 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.699343 2571 scope.go:117] "RemoveContainer" containerID="528c42215458e636bd2bf1e6b5351d7cf9d84af6db3d2d02b212e2d6d38388af" Apr 16 18:37:41.700735 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.700713 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" event={"ID":"28d7194e-4cc1-46d4-befc-1faef9286d01","Type":"ContainerStarted","Data":"a83b03e6f408a702be5fd57d4390a257702306a3046b14404577c1e764a67574"} Apr 16 18:37:41.700843 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.700744 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" event={"ID":"28d7194e-4cc1-46d4-befc-1faef9286d01","Type":"ContainerStarted","Data":"c33a2eca72bfcdc7a2fe65ab3f50a69e79667ddb9301eda58385b19833c5b1e9"} Apr 16 18:37:41.700909 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.700895 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" Apr 16 18:37:41.702435 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.702408 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" podUID="28d7194e-4cc1-46d4-befc-1faef9286d01" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.37:8080: connect: connection refused" Apr 16 18:37:41.710003 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.709905 2571 scope.go:117] "RemoveContainer" containerID="528c42215458e636bd2bf1e6b5351d7cf9d84af6db3d2d02b212e2d6d38388af" Apr 16 18:37:41.710332 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:37:41.710268 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"528c42215458e636bd2bf1e6b5351d7cf9d84af6db3d2d02b212e2d6d38388af\": container with ID starting with 528c42215458e636bd2bf1e6b5351d7cf9d84af6db3d2d02b212e2d6d38388af not found: ID does not exist" containerID="528c42215458e636bd2bf1e6b5351d7cf9d84af6db3d2d02b212e2d6d38388af" Apr 16 18:37:41.710332 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.710302 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"528c42215458e636bd2bf1e6b5351d7cf9d84af6db3d2d02b212e2d6d38388af"} err="failed to get container status \"528c42215458e636bd2bf1e6b5351d7cf9d84af6db3d2d02b212e2d6d38388af\": rpc error: code = NotFound desc = could not find container \"528c42215458e636bd2bf1e6b5351d7cf9d84af6db3d2d02b212e2d6d38388af\": container with ID starting with 
Apr 16 18:37:41.716379 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.716333 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" podStartSLOduration=1.7163169759999999 podStartE2EDuration="1.716316976s" podCreationTimestamp="2026-04-16 18:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:37:41.715271859 +0000 UTC m=+2119.544234989" watchObservedRunningTime="2026-04-16 18:37:41.716316976 +0000 UTC m=+2119.545280110"
Apr 16 18:37:41.728863 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.728839 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb"]
Apr 16 18:37:41.732169 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:41.732146 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-df772-7fb98745c-jlqxb"]
Apr 16 18:37:42.613650 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:42.613609 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" podUID="996efd22-cdee-405a-8852-b2952981236a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.36:8080: connect: connection refused"
Apr 16 18:37:42.704945 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:42.704902 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" podUID="28d7194e-4cc1-46d4-befc-1faef9286d01" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.37:8080: connect: connection refused"
Apr 16 18:37:42.715793 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:42.715756 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c062514c-adc1-4f8f-ad51-7a656f69c3eb" path="/var/lib/kubelet/pods/c062514c-adc1-4f8f-ad51-7a656f69c3eb/volumes"
Apr 16 18:37:43.316616 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:43.316546 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" podUID="45d446be-c5bb-4512-b301-7036b270391e" containerName="sequence-graph-ed24b" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 18:37:43.896570 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:43.896548 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96"
Apr 16 18:37:44.712492 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:44.712446 2571 generic.go:358] "Generic (PLEG): container finished" podID="c8537ca0-09da-4ad3-b1da-78551585c1ee" containerID="d2cb424601be972e631c18825dbef57ad61a1532830b4c17909b993d91a195aa" exitCode=0
Apr 16 18:37:44.712683 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:44.712578 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96"
Apr 16 18:37:44.714009 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:44.713983 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96" event={"ID":"c8537ca0-09da-4ad3-b1da-78551585c1ee","Type":"ContainerDied","Data":"d2cb424601be972e631c18825dbef57ad61a1532830b4c17909b993d91a195aa"}
Apr 16 18:37:44.714009 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:44.714013 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96" event={"ID":"c8537ca0-09da-4ad3-b1da-78551585c1ee","Type":"ContainerDied","Data":"cd5e5732a0b729611784b0a7fc8eea6a209d90eddb554b54f18bafedc1b16fb4"}
Apr 16 18:37:44.714165 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:44.714028 2571 scope.go:117] "RemoveContainer" containerID="d2cb424601be972e631c18825dbef57ad61a1532830b4c17909b993d91a195aa"
Apr 16 18:37:44.723313 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:44.723292 2571 scope.go:117] "RemoveContainer" containerID="d2cb424601be972e631c18825dbef57ad61a1532830b4c17909b993d91a195aa"
Apr 16 18:37:44.723549 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:37:44.723530 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2cb424601be972e631c18825dbef57ad61a1532830b4c17909b993d91a195aa\": container with ID starting with d2cb424601be972e631c18825dbef57ad61a1532830b4c17909b993d91a195aa not found: ID does not exist" containerID="d2cb424601be972e631c18825dbef57ad61a1532830b4c17909b993d91a195aa"
Apr 16 18:37:44.723625 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:44.723560 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2cb424601be972e631c18825dbef57ad61a1532830b4c17909b993d91a195aa"} err="failed to get container status \"d2cb424601be972e631c18825dbef57ad61a1532830b4c17909b993d91a195aa\": rpc error: code = NotFound desc = could not find container \"d2cb424601be972e631c18825dbef57ad61a1532830b4c17909b993d91a195aa\": container with ID starting with d2cb424601be972e631c18825dbef57ad61a1532830b4c17909b993d91a195aa not found: ID does not exist"
Apr 16 18:37:44.736266 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:44.736244 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96"]
Apr 16 18:37:44.741845 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:44.741824 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-ed24b-predictor-648cf478d9-kpr96"]
Apr 16 18:37:46.715498 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:46.715463 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8537ca0-09da-4ad3-b1da-78551585c1ee" path="/var/lib/kubelet/pods/c8537ca0-09da-4ad3-b1da-78551585c1ee/volumes"
Apr 16 18:37:48.313423 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:48.313382 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" podUID="45d446be-c5bb-4512-b301-7036b270391e" containerName="sequence-graph-ed24b" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 18:37:52.613966 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:52.613923 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" podUID="996efd22-cdee-405a-8852-b2952981236a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.36:8080: connect: connection refused"
Apr 16 18:37:52.705608 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:52.705560 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" podUID="28d7194e-4cc1-46d4-befc-1faef9286d01" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.37:8080: connect: connection refused"
Apr 16 18:37:53.313440 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:53.313400 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" podUID="45d446be-c5bb-4512-b301-7036b270391e" containerName="sequence-graph-ed24b" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 18:37:53.313611 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:53.313503 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk"
Apr 16 18:37:58.313654 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:37:58.313609 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" podUID="45d446be-c5bb-4512-b301-7036b270391e" containerName="sequence-graph-ed24b" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 18:38:02.615002 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:02.614920 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6"
Apr 16 18:38:02.705183 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:02.705118 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" podUID="28d7194e-4cc1-46d4-befc-1faef9286d01" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.37:8080: connect: connection refused"
Apr 16 18:38:03.313360 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:03.313314 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" podUID="45d446be-c5bb-4512-b301-7036b270391e" containerName="sequence-graph-ed24b" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 18:38:08.314052 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:08.314008 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" podUID="45d446be-c5bb-4512-b301-7036b270391e" containerName="sequence-graph-ed24b" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 18:38:10.772741 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:10.772717 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk"
Apr 16 18:38:10.786984 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:10.786958 2571 generic.go:358] "Generic (PLEG): container finished" podID="45d446be-c5bb-4512-b301-7036b270391e" containerID="e7efa95ca5586966aad354fc6272dc12729037fa20646e74f200c8149602bcf3" exitCode=0
Apr 16 18:38:10.787099 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:10.787021 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" Apr 16 18:38:10.787099 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:10.787032 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" event={"ID":"45d446be-c5bb-4512-b301-7036b270391e","Type":"ContainerDied","Data":"e7efa95ca5586966aad354fc6272dc12729037fa20646e74f200c8149602bcf3"} Apr 16 18:38:10.787099 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:10.787069 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk" event={"ID":"45d446be-c5bb-4512-b301-7036b270391e","Type":"ContainerDied","Data":"a6fe8d256e2613cbf38a1aedf6d1a0f1a2e4e8de8b0123d1f714cac3571fb67e"} Apr 16 18:38:10.787099 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:10.787088 2571 scope.go:117] "RemoveContainer" containerID="e7efa95ca5586966aad354fc6272dc12729037fa20646e74f200c8149602bcf3" Apr 16 18:38:10.797304 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:10.797271 2571 scope.go:117] "RemoveContainer" containerID="e7efa95ca5586966aad354fc6272dc12729037fa20646e74f200c8149602bcf3" Apr 16 18:38:10.797754 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:38:10.797733 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7efa95ca5586966aad354fc6272dc12729037fa20646e74f200c8149602bcf3\": container with ID starting with e7efa95ca5586966aad354fc6272dc12729037fa20646e74f200c8149602bcf3 not found: ID does not exist" containerID="e7efa95ca5586966aad354fc6272dc12729037fa20646e74f200c8149602bcf3" Apr 16 18:38:10.797853 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:10.797763 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7efa95ca5586966aad354fc6272dc12729037fa20646e74f200c8149602bcf3"} err="failed to get container status \"e7efa95ca5586966aad354fc6272dc12729037fa20646e74f200c8149602bcf3\": rpc error: code = NotFound desc = could not find container \"e7efa95ca5586966aad354fc6272dc12729037fa20646e74f200c8149602bcf3\": container with ID starting with e7efa95ca5586966aad354fc6272dc12729037fa20646e74f200c8149602bcf3 not found: ID does not exist" Apr 16 18:38:10.853784 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:10.853761 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45d446be-c5bb-4512-b301-7036b270391e-openshift-service-ca-bundle\") pod \"45d446be-c5bb-4512-b301-7036b270391e\" (UID: \"45d446be-c5bb-4512-b301-7036b270391e\") " Apr 16 18:38:10.853894 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:10.853822 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/45d446be-c5bb-4512-b301-7036b270391e-proxy-tls\") pod \"45d446be-c5bb-4512-b301-7036b270391e\" (UID: \"45d446be-c5bb-4512-b301-7036b270391e\") " Apr 16 18:38:10.854089 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:10.854068 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45d446be-c5bb-4512-b301-7036b270391e-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "45d446be-c5bb-4512-b301-7036b270391e" (UID: "45d446be-c5bb-4512-b301-7036b270391e"). InnerVolumeSpecName "openshift-service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:38:10.855790 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:10.855762 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d446be-c5bb-4512-b301-7036b270391e-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "45d446be-c5bb-4512-b301-7036b270391e" (UID: "45d446be-c5bb-4512-b301-7036b270391e"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:38:10.954312 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:10.954234 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/45d446be-c5bb-4512-b301-7036b270391e-proxy-tls\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:38:10.954312 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:10.954262 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45d446be-c5bb-4512-b301-7036b270391e-openshift-service-ca-bundle\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:38:11.107290 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:11.107260 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk"] Apr 16 18:38:11.110385 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:11.110363 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-ed24b-7c6b74f4df-gtjnk"] Apr 16 18:38:12.705735 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:12.705694 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" podUID="28d7194e-4cc1-46d4-befc-1faef9286d01" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.37:8080: connect: connection refused" Apr 16 18:38:12.715999 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:12.715964 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45d446be-c5bb-4512-b301-7036b270391e" path="/var/lib/kubelet/pods/45d446be-c5bb-4512-b301-7036b270391e/volumes" Apr 16 18:38:21.065578 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.065493 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g"] Apr 16 18:38:21.065976 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.065833 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45d446be-c5bb-4512-b301-7036b270391e" containerName="sequence-graph-ed24b" Apr 16 18:38:21.065976 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.065845 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d446be-c5bb-4512-b301-7036b270391e" containerName="sequence-graph-ed24b" Apr 16 18:38:21.065976 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.065861 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c8537ca0-09da-4ad3-b1da-78551585c1ee" containerName="kserve-container" Apr 16 18:38:21.065976 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.065866 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8537ca0-09da-4ad3-b1da-78551585c1ee" containerName="kserve-container" Apr 16 18:38:21.065976 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.065876 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c062514c-adc1-4f8f-ad51-7a656f69c3eb" containerName="ensemble-graph-df772" Apr 16 18:38:21.065976 
ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.065882 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="c062514c-adc1-4f8f-ad51-7a656f69c3eb" containerName="ensemble-graph-df772" Apr 16 18:38:21.065976 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.065935 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="c062514c-adc1-4f8f-ad51-7a656f69c3eb" containerName="ensemble-graph-df772" Apr 16 18:38:21.065976 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.065945 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="45d446be-c5bb-4512-b301-7036b270391e" containerName="sequence-graph-ed24b" Apr 16 18:38:21.065976 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.065952 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="c8537ca0-09da-4ad3-b1da-78551585c1ee" containerName="kserve-container" Apr 16 18:38:21.068756 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.068741 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" Apr 16 18:38:21.070912 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.070889 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-78581-kube-rbac-proxy-sar-config\"" Apr 16 18:38:21.071164 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.071150 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 16 18:38:21.071386 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.071372 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-78581-serving-cert\"" Apr 16 18:38:21.081475 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.081451 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g"] Apr 16 18:38:21.122836 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.122805 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bae1a6a-377e-418b-8998-cd4f92f48afc-openshift-service-ca-bundle\") pod \"splitter-graph-78581-5cd4b8fdb7-tts6g\" (UID: \"5bae1a6a-377e-418b-8998-cd4f92f48afc\") " pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" Apr 16 18:38:21.122836 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.122838 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5bae1a6a-377e-418b-8998-cd4f92f48afc-proxy-tls\") pod \"splitter-graph-78581-5cd4b8fdb7-tts6g\" (UID: \"5bae1a6a-377e-418b-8998-cd4f92f48afc\") " pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" Apr 16 18:38:21.223484 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.223453 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bae1a6a-377e-418b-8998-cd4f92f48afc-openshift-service-ca-bundle\") pod \"splitter-graph-78581-5cd4b8fdb7-tts6g\" (UID: \"5bae1a6a-377e-418b-8998-cd4f92f48afc\") " pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" Apr 16 18:38:21.223484 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.223487 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/5bae1a6a-377e-418b-8998-cd4f92f48afc-proxy-tls\") pod \"splitter-graph-78581-5cd4b8fdb7-tts6g\" (UID: \"5bae1a6a-377e-418b-8998-cd4f92f48afc\") " pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" Apr 16 18:38:21.223715 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:38:21.223618 2571 secret.go:189] Couldn't get secret kserve-ci-e2e-test/splitter-graph-78581-serving-cert: secret "splitter-graph-78581-serving-cert" not found Apr 16 18:38:21.223715 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:38:21.223696 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bae1a6a-377e-418b-8998-cd4f92f48afc-proxy-tls podName:5bae1a6a-377e-418b-8998-cd4f92f48afc nodeName:}" failed. No retries permitted until 2026-04-16 18:38:21.72367419 +0000 UTC m=+2159.552637301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5bae1a6a-377e-418b-8998-cd4f92f48afc-proxy-tls") pod "splitter-graph-78581-5cd4b8fdb7-tts6g" (UID: "5bae1a6a-377e-418b-8998-cd4f92f48afc") : secret "splitter-graph-78581-serving-cert" not found Apr 16 18:38:21.224221 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.224196 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bae1a6a-377e-418b-8998-cd4f92f48afc-openshift-service-ca-bundle\") pod \"splitter-graph-78581-5cd4b8fdb7-tts6g\" (UID: \"5bae1a6a-377e-418b-8998-cd4f92f48afc\") " pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" Apr 16 18:38:21.726998 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.726967 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5bae1a6a-377e-418b-8998-cd4f92f48afc-proxy-tls\") pod \"splitter-graph-78581-5cd4b8fdb7-tts6g\" (UID: \"5bae1a6a-377e-418b-8998-cd4f92f48afc\") " pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" Apr 16 18:38:21.729308 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.729289 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5bae1a6a-377e-418b-8998-cd4f92f48afc-proxy-tls\") pod \"splitter-graph-78581-5cd4b8fdb7-tts6g\" (UID: \"5bae1a6a-377e-418b-8998-cd4f92f48afc\") " pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" Apr 16 18:38:21.979093 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:21.978982 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" Apr 16 18:38:22.103051 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:22.103020 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g"] Apr 16 18:38:22.105582 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:38:22.105547 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bae1a6a_377e_418b_8998_cd4f92f48afc.slice/crio-5a3de3b9fcfdaab7ea9734a73a97a72969bf4cf5496ebd6d319205ed2bd830c7 WatchSource:0}: Error finding container 5a3de3b9fcfdaab7ea9734a73a97a72969bf4cf5496ebd6d319205ed2bd830c7: Status 404 returned error can't find the container with id 5a3de3b9fcfdaab7ea9734a73a97a72969bf4cf5496ebd6d319205ed2bd830c7 Apr 16 18:38:22.705902 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:22.705865 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" podUID="28d7194e-4cc1-46d4-befc-1faef9286d01" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.37:8080: connect: connection refused" Apr 16 18:38:22.822646 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:22.822614 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" event={"ID":"5bae1a6a-377e-418b-8998-cd4f92f48afc","Type":"ContainerStarted","Data":"2fd5f6a5fc622bfe847551c90bfef8e4c0d7124e517fd14f6684fac2d9cade8a"} Apr 16 18:38:22.822646 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:22.822648 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" event={"ID":"5bae1a6a-377e-418b-8998-cd4f92f48afc","Type":"ContainerStarted","Data":"5a3de3b9fcfdaab7ea9734a73a97a72969bf4cf5496ebd6d319205ed2bd830c7"} Apr 16 18:38:22.822819 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:22.822754 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" Apr 16 18:38:22.839666 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:22.839623 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" podStartSLOduration=1.8396058800000001 podStartE2EDuration="1.83960588s" podCreationTimestamp="2026-04-16 18:38:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:38:22.837831553 +0000 UTC m=+2160.666794700" watchObservedRunningTime="2026-04-16 18:38:22.83960588 +0000 UTC m=+2160.668569013" Apr 16 18:38:28.831084 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:28.831056 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" Apr 16 18:38:31.085039 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:31.085005 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g"] Apr 16 18:38:31.085429 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:31.085201 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" podUID="5bae1a6a-377e-418b-8998-cd4f92f48afc" containerName="splitter-graph-78581" 
containerID="cri-o://2fd5f6a5fc622bfe847551c90bfef8e4c0d7124e517fd14f6684fac2d9cade8a" gracePeriod=30 Apr 16 18:38:31.201288 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:31.201256 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6"] Apr 16 18:38:31.201492 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:31.201470 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" podUID="996efd22-cdee-405a-8852-b2952981236a" containerName="kserve-container" containerID="cri-o://26661d3aba82e96434db643afe4ee554b467e36941732252125315b591109974" gracePeriod=30 Apr 16 18:38:31.211530 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:31.211506 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb"] Apr 16 18:38:31.214574 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:31.214556 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" Apr 16 18:38:31.220662 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:31.220638 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb"] Apr 16 18:38:31.225477 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:31.225455 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" Apr 16 18:38:31.358430 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:31.358392 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb"] Apr 16 18:38:31.361278 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:38:31.361245 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod367f033e_09d9_4af4_bf1c_d7d5d49ba20e.slice/crio-96b05ce3961e54e23432fb17baa2c085f9074a911425f71dbb0e9920a0cf6222 WatchSource:0}: Error finding container 96b05ce3961e54e23432fb17baa2c085f9074a911425f71dbb0e9920a0cf6222: Status 404 returned error can't find the container with id 96b05ce3961e54e23432fb17baa2c085f9074a911425f71dbb0e9920a0cf6222 Apr 16 18:38:31.848351 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:31.848313 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" event={"ID":"367f033e-09d9-4af4-bf1c-d7d5d49ba20e","Type":"ContainerStarted","Data":"5007e44c87dd2161e499742a72caa865a712f428bf0329c65cf244620987d18e"} Apr 16 18:38:31.848520 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:31.848356 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" event={"ID":"367f033e-09d9-4af4-bf1c-d7d5d49ba20e","Type":"ContainerStarted","Data":"96b05ce3961e54e23432fb17baa2c085f9074a911425f71dbb0e9920a0cf6222"} Apr 16 18:38:31.848606 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:31.848544 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" Apr 16 18:38:31.849731 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:31.849710 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" 
podUID="367f033e-09d9-4af4-bf1c-d7d5d49ba20e" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.39:8080: connect: connection refused" Apr 16 18:38:31.861977 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:31.861930 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" podStartSLOduration=0.861914409 podStartE2EDuration="861.914409ms" podCreationTimestamp="2026-04-16 18:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:38:31.861349654 +0000 UTC m=+2169.690312786" watchObservedRunningTime="2026-04-16 18:38:31.861914409 +0000 UTC m=+2169.690877544" Apr 16 18:38:32.613899 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:32.613851 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" podUID="996efd22-cdee-405a-8852-b2952981236a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.36:8080: connect: connection refused" Apr 16 18:38:32.706472 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:32.706445 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" Apr 16 18:38:32.856488 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:32.856384 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" podUID="367f033e-09d9-4af4-bf1c-d7d5d49ba20e" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.39:8080: connect: connection refused" Apr 16 18:38:33.829653 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:33.829579 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" podUID="5bae1a6a-377e-418b-8998-cd4f92f48afc" containerName="splitter-graph-78581" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:38:34.641936 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:34.641914 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" Apr 16 18:38:34.859996 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:34.859962 2571 generic.go:358] "Generic (PLEG): container finished" podID="996efd22-cdee-405a-8852-b2952981236a" containerID="26661d3aba82e96434db643afe4ee554b467e36941732252125315b591109974" exitCode=0 Apr 16 18:38:34.859996 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:34.860000 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" event={"ID":"996efd22-cdee-405a-8852-b2952981236a","Type":"ContainerDied","Data":"26661d3aba82e96434db643afe4ee554b467e36941732252125315b591109974"} Apr 16 18:38:34.860464 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:34.860027 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" event={"ID":"996efd22-cdee-405a-8852-b2952981236a","Type":"ContainerDied","Data":"b16177dea9716e0f15392c38c5bbb9dc551d12f2eb2a964c218d70adb6ddbd5e"} Apr 16 18:38:34.860464 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:34.860027 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6" Apr 16 18:38:34.860464 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:34.860041 2571 scope.go:117] "RemoveContainer" containerID="26661d3aba82e96434db643afe4ee554b467e36941732252125315b591109974" Apr 16 18:38:34.867790 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:34.867768 2571 scope.go:117] "RemoveContainer" containerID="26661d3aba82e96434db643afe4ee554b467e36941732252125315b591109974" Apr 16 18:38:34.868041 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:38:34.868021 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26661d3aba82e96434db643afe4ee554b467e36941732252125315b591109974\": container with ID starting with 26661d3aba82e96434db643afe4ee554b467e36941732252125315b591109974 not found: ID does not exist" containerID="26661d3aba82e96434db643afe4ee554b467e36941732252125315b591109974" Apr 16 18:38:34.868089 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:34.868051 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26661d3aba82e96434db643afe4ee554b467e36941732252125315b591109974"} err="failed to get container status \"26661d3aba82e96434db643afe4ee554b467e36941732252125315b591109974\": rpc error: code = NotFound desc = could not find container \"26661d3aba82e96434db643afe4ee554b467e36941732252125315b591109974\": container with ID starting with 26661d3aba82e96434db643afe4ee554b467e36941732252125315b591109974 not found: ID does not exist" Apr 16 18:38:34.874489 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:34.874468 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6"] Apr 16 18:38:34.877743 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:34.877724 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-78581-predictor-d6d9bf857-m87d6"] Apr 16 18:38:36.714964 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:36.714919 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="996efd22-cdee-405a-8852-b2952981236a" path="/var/lib/kubelet/pods/996efd22-cdee-405a-8852-b2952981236a/volumes" Apr 16 18:38:38.829480 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:38.829443 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" podUID="5bae1a6a-377e-418b-8998-cd4f92f48afc" containerName="splitter-graph-78581" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:38:42.854098 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:42.854053 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" podUID="367f033e-09d9-4af4-bf1c-d7d5d49ba20e" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.39:8080: connect: connection refused" Apr 16 18:38:43.829523 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:43.829476 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" podUID="5bae1a6a-377e-418b-8998-cd4f92f48afc" containerName="splitter-graph-78581" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:38:43.829712 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:43.829609 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" Apr 16 18:38:48.830175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:48.830138 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" podUID="5bae1a6a-377e-418b-8998-cd4f92f48afc" containerName="splitter-graph-78581" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:38:50.805026 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:50.804992 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv"] Apr 16 18:38:50.805376 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:50.805302 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="996efd22-cdee-405a-8852-b2952981236a" containerName="kserve-container" Apr 16 18:38:50.805376 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:50.805313 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="996efd22-cdee-405a-8852-b2952981236a" containerName="kserve-container" Apr 16 18:38:50.805376 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:50.805375 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="996efd22-cdee-405a-8852-b2952981236a" containerName="kserve-container" Apr 16 18:38:50.808286 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:50.808265 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" Apr 16 18:38:50.810375 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:50.810351 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-89f53-serving-cert\"" Apr 16 18:38:50.810467 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:50.810350 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-89f53-kube-rbac-proxy-sar-config\"" Apr 16 18:38:50.813881 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:50.813858 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv"] Apr 16 18:38:50.845891 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:50.845853 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/148cab96-c158-4ff5-9f75-51c5a9d73396-openshift-service-ca-bundle\") pod \"switch-graph-89f53-d7bd89598-qk2zv\" (UID: \"148cab96-c158-4ff5-9f75-51c5a9d73396\") " pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" Apr 16 18:38:50.846025 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:50.845903 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/148cab96-c158-4ff5-9f75-51c5a9d73396-proxy-tls\") pod \"switch-graph-89f53-d7bd89598-qk2zv\" (UID: \"148cab96-c158-4ff5-9f75-51c5a9d73396\") " pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" Apr 16 18:38:50.947077 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:50.947039 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/148cab96-c158-4ff5-9f75-51c5a9d73396-proxy-tls\") pod \"switch-graph-89f53-d7bd89598-qk2zv\" (UID: \"148cab96-c158-4ff5-9f75-51c5a9d73396\") " pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" Apr 16 18:38:50.947246 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:50.947110 
2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/148cab96-c158-4ff5-9f75-51c5a9d73396-openshift-service-ca-bundle\") pod \"switch-graph-89f53-d7bd89598-qk2zv\" (UID: \"148cab96-c158-4ff5-9f75-51c5a9d73396\") " pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" Apr 16 18:38:50.947246 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:38:50.947183 2571 secret.go:189] Couldn't get secret kserve-ci-e2e-test/switch-graph-89f53-serving-cert: secret "switch-graph-89f53-serving-cert" not found Apr 16 18:38:50.947339 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:38:50.947260 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/148cab96-c158-4ff5-9f75-51c5a9d73396-proxy-tls podName:148cab96-c158-4ff5-9f75-51c5a9d73396 nodeName:}" failed. No retries permitted until 2026-04-16 18:38:51.447241407 +0000 UTC m=+2189.276204517 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/148cab96-c158-4ff5-9f75-51c5a9d73396-proxy-tls") pod "switch-graph-89f53-d7bd89598-qk2zv" (UID: "148cab96-c158-4ff5-9f75-51c5a9d73396") : secret "switch-graph-89f53-serving-cert" not found Apr 16 18:38:50.947710 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:50.947693 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/148cab96-c158-4ff5-9f75-51c5a9d73396-openshift-service-ca-bundle\") pod \"switch-graph-89f53-d7bd89598-qk2zv\" (UID: \"148cab96-c158-4ff5-9f75-51c5a9d73396\") " pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" Apr 16 18:38:51.451684 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:51.451638 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/148cab96-c158-4ff5-9f75-51c5a9d73396-proxy-tls\") pod \"switch-graph-89f53-d7bd89598-qk2zv\" (UID: \"148cab96-c158-4ff5-9f75-51c5a9d73396\") " pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" Apr 16 18:38:51.454240 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:51.454213 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/148cab96-c158-4ff5-9f75-51c5a9d73396-proxy-tls\") pod \"switch-graph-89f53-d7bd89598-qk2zv\" (UID: \"148cab96-c158-4ff5-9f75-51c5a9d73396\") " pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" Apr 16 18:38:51.719641 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:51.719531 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" Apr 16 18:38:51.866634 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:51.866583 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv"] Apr 16 18:38:51.869737 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:38:51.869708 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod148cab96_c158_4ff5_9f75_51c5a9d73396.slice/crio-72221fd362c945f1cb0b5037aa8100a5f16bc037eccb28247f3ef7043e0ae5c7 WatchSource:0}: Error finding container 72221fd362c945f1cb0b5037aa8100a5f16bc037eccb28247f3ef7043e0ae5c7: Status 404 returned error can't find the container with id 72221fd362c945f1cb0b5037aa8100a5f16bc037eccb28247f3ef7043e0ae5c7 Apr 16 18:38:51.908792 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:51.908758 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" event={"ID":"148cab96-c158-4ff5-9f75-51c5a9d73396","Type":"ContainerStarted","Data":"72221fd362c945f1cb0b5037aa8100a5f16bc037eccb28247f3ef7043e0ae5c7"} Apr 16 18:38:52.854027 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:52.853982 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" podUID="367f033e-09d9-4af4-bf1c-d7d5d49ba20e" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.39:8080: connect: connection refused" Apr 16 18:38:52.913875 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:52.913838 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" event={"ID":"148cab96-c158-4ff5-9f75-51c5a9d73396","Type":"ContainerStarted","Data":"5c0077afa9e2969772cc449b65e360f6470475ce562636a626901747c6a34ec8"} Apr 16 18:38:52.914288 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:52.913889 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" Apr 16 18:38:52.929868 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:52.929825 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" podStartSLOduration=2.929810969 podStartE2EDuration="2.929810969s" podCreationTimestamp="2026-04-16 18:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:38:52.928903085 +0000 UTC m=+2190.757866218" watchObservedRunningTime="2026-04-16 18:38:52.929810969 +0000 UTC m=+2190.758774101" Apr 16 18:38:53.829643 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:53.829576 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" podUID="5bae1a6a-377e-418b-8998-cd4f92f48afc" containerName="splitter-graph-78581" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:38:58.829944 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:58.829901 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" podUID="5bae1a6a-377e-418b-8998-cd4f92f48afc" containerName="splitter-graph-78581" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:38:58.922370 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:38:58.922343 2571 kubelet.go:2658] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" Apr 16 18:39:01.728973 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:01.728949 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" Apr 16 18:39:01.840181 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:01.840147 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bae1a6a-377e-418b-8998-cd4f92f48afc-openshift-service-ca-bundle\") pod \"5bae1a6a-377e-418b-8998-cd4f92f48afc\" (UID: \"5bae1a6a-377e-418b-8998-cd4f92f48afc\") " Apr 16 18:39:01.840380 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:01.840220 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5bae1a6a-377e-418b-8998-cd4f92f48afc-proxy-tls\") pod \"5bae1a6a-377e-418b-8998-cd4f92f48afc\" (UID: \"5bae1a6a-377e-418b-8998-cd4f92f48afc\") " Apr 16 18:39:01.840521 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:01.840492 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bae1a6a-377e-418b-8998-cd4f92f48afc-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "5bae1a6a-377e-418b-8998-cd4f92f48afc" (UID: "5bae1a6a-377e-418b-8998-cd4f92f48afc"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:39:01.842261 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:01.842238 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bae1a6a-377e-418b-8998-cd4f92f48afc-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "5bae1a6a-377e-418b-8998-cd4f92f48afc" (UID: "5bae1a6a-377e-418b-8998-cd4f92f48afc"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:39:01.941226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:01.941132 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5bae1a6a-377e-418b-8998-cd4f92f48afc-proxy-tls\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:39:01.941226 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:01.941166 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bae1a6a-377e-418b-8998-cd4f92f48afc-openshift-service-ca-bundle\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:39:01.945252 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:01.945221 2571 generic.go:358] "Generic (PLEG): container finished" podID="5bae1a6a-377e-418b-8998-cd4f92f48afc" containerID="2fd5f6a5fc622bfe847551c90bfef8e4c0d7124e517fd14f6684fac2d9cade8a" exitCode=0 Apr 16 18:39:01.945403 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:01.945284 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" Apr 16 18:39:01.945403 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:01.945302 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" event={"ID":"5bae1a6a-377e-418b-8998-cd4f92f48afc","Type":"ContainerDied","Data":"2fd5f6a5fc622bfe847551c90bfef8e4c0d7124e517fd14f6684fac2d9cade8a"} Apr 16 18:39:01.945403 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:01.945340 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g" event={"ID":"5bae1a6a-377e-418b-8998-cd4f92f48afc","Type":"ContainerDied","Data":"5a3de3b9fcfdaab7ea9734a73a97a72969bf4cf5496ebd6d319205ed2bd830c7"} Apr 16 18:39:01.945403 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:01.945355 2571 scope.go:117] "RemoveContainer" containerID="2fd5f6a5fc622bfe847551c90bfef8e4c0d7124e517fd14f6684fac2d9cade8a" Apr 16 18:39:01.953329 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:01.953306 2571 scope.go:117] "RemoveContainer" containerID="2fd5f6a5fc622bfe847551c90bfef8e4c0d7124e517fd14f6684fac2d9cade8a" Apr 16 18:39:01.953632 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:39:01.953612 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fd5f6a5fc622bfe847551c90bfef8e4c0d7124e517fd14f6684fac2d9cade8a\": container with ID starting with 2fd5f6a5fc622bfe847551c90bfef8e4c0d7124e517fd14f6684fac2d9cade8a not found: ID does not exist" containerID="2fd5f6a5fc622bfe847551c90bfef8e4c0d7124e517fd14f6684fac2d9cade8a" Apr 16 18:39:01.953702 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:01.953640 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fd5f6a5fc622bfe847551c90bfef8e4c0d7124e517fd14f6684fac2d9cade8a"} err="failed to get container status \"2fd5f6a5fc622bfe847551c90bfef8e4c0d7124e517fd14f6684fac2d9cade8a\": rpc error: code = NotFound desc = could not find container \"2fd5f6a5fc622bfe847551c90bfef8e4c0d7124e517fd14f6684fac2d9cade8a\": container with ID starting with 2fd5f6a5fc622bfe847551c90bfef8e4c0d7124e517fd14f6684fac2d9cade8a not found: ID does not exist" Apr 16 18:39:01.964141 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:01.964115 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g"] Apr 16 18:39:01.967446 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:01.967421 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-78581-5cd4b8fdb7-tts6g"] Apr 16 18:39:02.715194 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:02.715166 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bae1a6a-377e-418b-8998-cd4f92f48afc" path="/var/lib/kubelet/pods/5bae1a6a-377e-418b-8998-cd4f92f48afc/volumes" Apr 16 18:39:02.853960 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:02.853919 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" podUID="367f033e-09d9-4af4-bf1c-d7d5d49ba20e" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.39:8080: connect: connection refused" Apr 16 18:39:12.854388 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:12.854337 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" 
podUID="367f033e-09d9-4af4-bf1c-d7d5d49ba20e" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.39:8080: connect: connection refused" Apr 16 18:39:22.854606 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:22.854559 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" Apr 16 18:39:41.288184 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:41.288098 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh"] Apr 16 18:39:41.288620 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:41.288560 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bae1a6a-377e-418b-8998-cd4f92f48afc" containerName="splitter-graph-78581" Apr 16 18:39:41.288620 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:41.288578 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bae1a6a-377e-418b-8998-cd4f92f48afc" containerName="splitter-graph-78581" Apr 16 18:39:41.288747 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:41.288674 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="5bae1a6a-377e-418b-8998-cd4f92f48afc" containerName="splitter-graph-78581" Apr 16 18:39:41.292900 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:41.292879 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" Apr 16 18:39:41.294918 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:41.294896 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-84e06-kube-rbac-proxy-sar-config\"" Apr 16 18:39:41.295070 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:41.295048 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-84e06-serving-cert\"" Apr 16 18:39:41.298163 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:41.298141 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh"] Apr 16 18:39:41.347718 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:41.347683 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da4fef60-9248-459d-8b42-5ccd6f77aa0d-openshift-service-ca-bundle\") pod \"splitter-graph-84e06-7f457f648b-sqvbh\" (UID: \"da4fef60-9248-459d-8b42-5ccd6f77aa0d\") " pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" Apr 16 18:39:41.347884 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:41.347801 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/da4fef60-9248-459d-8b42-5ccd6f77aa0d-proxy-tls\") pod \"splitter-graph-84e06-7f457f648b-sqvbh\" (UID: \"da4fef60-9248-459d-8b42-5ccd6f77aa0d\") " pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" Apr 16 18:39:41.448516 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:41.448481 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da4fef60-9248-459d-8b42-5ccd6f77aa0d-openshift-service-ca-bundle\") pod \"splitter-graph-84e06-7f457f648b-sqvbh\" (UID: \"da4fef60-9248-459d-8b42-5ccd6f77aa0d\") " pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" Apr 16 
18:39:41.448695 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:41.448565 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/da4fef60-9248-459d-8b42-5ccd6f77aa0d-proxy-tls\") pod \"splitter-graph-84e06-7f457f648b-sqvbh\" (UID: \"da4fef60-9248-459d-8b42-5ccd6f77aa0d\") " pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" Apr 16 18:39:41.449109 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:41.449090 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da4fef60-9248-459d-8b42-5ccd6f77aa0d-openshift-service-ca-bundle\") pod \"splitter-graph-84e06-7f457f648b-sqvbh\" (UID: \"da4fef60-9248-459d-8b42-5ccd6f77aa0d\") " pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" Apr 16 18:39:41.451002 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:41.450985 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/da4fef60-9248-459d-8b42-5ccd6f77aa0d-proxy-tls\") pod \"splitter-graph-84e06-7f457f648b-sqvbh\" (UID: \"da4fef60-9248-459d-8b42-5ccd6f77aa0d\") " pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" Apr 16 18:39:41.603403 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:41.603368 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" Apr 16 18:39:41.721276 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:41.721122 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh"] Apr 16 18:39:41.724084 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:39:41.724059 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda4fef60_9248_459d_8b42_5ccd6f77aa0d.slice/crio-eb17dc6f92c8f96dbf339e11644d7233bf6e355c44cd55c8a10908be01fbd4c4 WatchSource:0}: Error finding container eb17dc6f92c8f96dbf339e11644d7233bf6e355c44cd55c8a10908be01fbd4c4: Status 404 returned error can't find the container with id eb17dc6f92c8f96dbf339e11644d7233bf6e355c44cd55c8a10908be01fbd4c4 Apr 16 18:39:42.062867 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:42.062785 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" event={"ID":"da4fef60-9248-459d-8b42-5ccd6f77aa0d","Type":"ContainerStarted","Data":"7a5e96a12f798b5e0514ea387130fb30111054d50f8211e6f87119d4ba188349"} Apr 16 18:39:42.062867 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:42.062826 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" event={"ID":"da4fef60-9248-459d-8b42-5ccd6f77aa0d","Type":"ContainerStarted","Data":"eb17dc6f92c8f96dbf339e11644d7233bf6e355c44cd55c8a10908be01fbd4c4"} Apr 16 18:39:42.063042 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:42.062909 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" Apr 16 18:39:42.077622 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:42.077537 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" podStartSLOduration=1.077522986 podStartE2EDuration="1.077522986s" podCreationTimestamp="2026-04-16 18:39:41 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:39:42.076940248 +0000 UTC m=+2239.905903382" watchObservedRunningTime="2026-04-16 18:39:42.077522986 +0000 UTC m=+2239.906486096" Apr 16 18:39:48.070501 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:39:48.070465 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" Apr 16 18:47:56.021981 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:47:56.021949 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh"] Apr 16 18:47:56.024401 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:47:56.022196 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" podUID="da4fef60-9248-459d-8b42-5ccd6f77aa0d" containerName="splitter-graph-84e06" containerID="cri-o://7a5e96a12f798b5e0514ea387130fb30111054d50f8211e6f87119d4ba188349" gracePeriod=30 Apr 16 18:47:56.113490 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:47:56.113451 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb"] Apr 16 18:47:56.113768 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:47:56.113744 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" podUID="367f033e-09d9-4af4-bf1c-d7d5d49ba20e" containerName="kserve-container" containerID="cri-o://5007e44c87dd2161e499742a72caa865a712f428bf0329c65cf244620987d18e" gracePeriod=30 Apr 16 18:47:58.068932 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:47:58.068889 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" podUID="da4fef60-9248-459d-8b42-5ccd6f77aa0d" containerName="splitter-graph-84e06" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:47:59.248992 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:47:59.248969 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" Apr 16 18:47:59.465706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:47:59.465620 2571 generic.go:358] "Generic (PLEG): container finished" podID="367f033e-09d9-4af4-bf1c-d7d5d49ba20e" containerID="5007e44c87dd2161e499742a72caa865a712f428bf0329c65cf244620987d18e" exitCode=0 Apr 16 18:47:59.465706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:47:59.465670 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" event={"ID":"367f033e-09d9-4af4-bf1c-d7d5d49ba20e","Type":"ContainerDied","Data":"5007e44c87dd2161e499742a72caa865a712f428bf0329c65cf244620987d18e"} Apr 16 18:47:59.465706 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:47:59.465708 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" event={"ID":"367f033e-09d9-4af4-bf1c-d7d5d49ba20e","Type":"ContainerDied","Data":"96b05ce3961e54e23432fb17baa2c085f9074a911425f71dbb0e9920a0cf6222"} Apr 16 18:47:59.465911 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:47:59.465723 2571 scope.go:117] "RemoveContainer" containerID="5007e44c87dd2161e499742a72caa865a712f428bf0329c65cf244620987d18e" Apr 16 18:47:59.465911 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:47:59.465685 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb" Apr 16 18:47:59.473856 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:47:59.473841 2571 scope.go:117] "RemoveContainer" containerID="5007e44c87dd2161e499742a72caa865a712f428bf0329c65cf244620987d18e" Apr 16 18:47:59.474106 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:47:59.474087 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5007e44c87dd2161e499742a72caa865a712f428bf0329c65cf244620987d18e\": container with ID starting with 5007e44c87dd2161e499742a72caa865a712f428bf0329c65cf244620987d18e not found: ID does not exist" containerID="5007e44c87dd2161e499742a72caa865a712f428bf0329c65cf244620987d18e" Apr 16 18:47:59.474175 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:47:59.474117 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5007e44c87dd2161e499742a72caa865a712f428bf0329c65cf244620987d18e"} err="failed to get container status \"5007e44c87dd2161e499742a72caa865a712f428bf0329c65cf244620987d18e\": rpc error: code = NotFound desc = could not find container \"5007e44c87dd2161e499742a72caa865a712f428bf0329c65cf244620987d18e\": container with ID starting with 5007e44c87dd2161e499742a72caa865a712f428bf0329c65cf244620987d18e not found: ID does not exist" Apr 16 18:47:59.485738 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:47:59.485713 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb"] Apr 16 18:47:59.488967 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:47:59.488947 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-84e06-predictor-579c9cdd97-z77lb"] Apr 16 18:48:00.714880 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:00.714848 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="367f033e-09d9-4af4-bf1c-d7d5d49ba20e" path="/var/lib/kubelet/pods/367f033e-09d9-4af4-bf1c-d7d5d49ba20e/volumes" Apr 16 18:48:03.069776 ip-10-0-128-59 
kubenswrapper[2571]: I0416 18:48:03.069739 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" podUID="da4fef60-9248-459d-8b42-5ccd6f77aa0d" containerName="splitter-graph-84e06" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:48:08.069994 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:08.069952 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" podUID="da4fef60-9248-459d-8b42-5ccd6f77aa0d" containerName="splitter-graph-84e06" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:48:08.070370 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:08.070096 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" Apr 16 18:48:13.070013 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:13.069969 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" podUID="da4fef60-9248-459d-8b42-5ccd6f77aa0d" containerName="splitter-graph-84e06" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:48:18.069953 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:18.069915 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" podUID="da4fef60-9248-459d-8b42-5ccd6f77aa0d" containerName="splitter-graph-84e06" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:48:23.069573 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:23.069538 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" podUID="da4fef60-9248-459d-8b42-5ccd6f77aa0d" containerName="splitter-graph-84e06" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:48:26.540365 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:26.540337 2571 generic.go:358] "Generic (PLEG): container finished" podID="da4fef60-9248-459d-8b42-5ccd6f77aa0d" containerID="7a5e96a12f798b5e0514ea387130fb30111054d50f8211e6f87119d4ba188349" exitCode=0 Apr 16 18:48:26.540702 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:26.540389 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" event={"ID":"da4fef60-9248-459d-8b42-5ccd6f77aa0d","Type":"ContainerDied","Data":"7a5e96a12f798b5e0514ea387130fb30111054d50f8211e6f87119d4ba188349"} Apr 16 18:48:26.662944 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:26.662921 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" Apr 16 18:48:26.794579 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:26.794491 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/da4fef60-9248-459d-8b42-5ccd6f77aa0d-proxy-tls\") pod \"da4fef60-9248-459d-8b42-5ccd6f77aa0d\" (UID: \"da4fef60-9248-459d-8b42-5ccd6f77aa0d\") " Apr 16 18:48:26.794579 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:26.794549 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da4fef60-9248-459d-8b42-5ccd6f77aa0d-openshift-service-ca-bundle\") pod \"da4fef60-9248-459d-8b42-5ccd6f77aa0d\" (UID: \"da4fef60-9248-459d-8b42-5ccd6f77aa0d\") " Apr 16 18:48:26.794924 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:26.794898 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4fef60-9248-459d-8b42-5ccd6f77aa0d-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "da4fef60-9248-459d-8b42-5ccd6f77aa0d" (UID: "da4fef60-9248-459d-8b42-5ccd6f77aa0d"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:48:26.796526 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:26.796500 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da4fef60-9248-459d-8b42-5ccd6f77aa0d-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "da4fef60-9248-459d-8b42-5ccd6f77aa0d" (UID: "da4fef60-9248-459d-8b42-5ccd6f77aa0d"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:48:26.895840 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:26.895806 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/da4fef60-9248-459d-8b42-5ccd6f77aa0d-proxy-tls\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:48:26.895840 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:26.895838 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da4fef60-9248-459d-8b42-5ccd6f77aa0d-openshift-service-ca-bundle\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:48:27.544016 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:27.543986 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" event={"ID":"da4fef60-9248-459d-8b42-5ccd6f77aa0d","Type":"ContainerDied","Data":"eb17dc6f92c8f96dbf339e11644d7233bf6e355c44cd55c8a10908be01fbd4c4"} Apr 16 18:48:27.544016 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:27.543993 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh" Apr 16 18:48:27.544610 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:27.544029 2571 scope.go:117] "RemoveContainer" containerID="7a5e96a12f798b5e0514ea387130fb30111054d50f8211e6f87119d4ba188349" Apr 16 18:48:27.566367 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:27.566343 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh"] Apr 16 18:48:27.567570 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:27.567548 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-84e06-7f457f648b-sqvbh"] Apr 16 18:48:28.715280 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:48:28.715250 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da4fef60-9248-459d-8b42-5ccd6f77aa0d" path="/var/lib/kubelet/pods/da4fef60-9248-459d-8b42-5ccd6f77aa0d/volumes" Apr 16 18:55:10.239324 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:10.239285 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv"] Apr 16 18:55:10.241743 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:10.239519 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" podUID="148cab96-c158-4ff5-9f75-51c5a9d73396" containerName="switch-graph-89f53" containerID="cri-o://5c0077afa9e2969772cc449b65e360f6470475ce562636a626901747c6a34ec8" gracePeriod=30 Apr 16 18:55:10.379160 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:10.379129 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4"] Apr 16 18:55:10.379373 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:10.379352 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" podUID="28d7194e-4cc1-46d4-befc-1faef9286d01" containerName="kserve-container" containerID="cri-o://a83b03e6f408a702be5fd57d4390a257702306a3046b14404577c1e764a67574" gracePeriod=30 Apr 16 18:55:12.705911 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:12.705870 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" podUID="28d7194e-4cc1-46d4-befc-1faef9286d01" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.37:8080: connect: connection refused" Apr 16 18:55:13.429389 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:13.429366 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" Apr 16 18:55:13.676314 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:13.676282 2571 generic.go:358] "Generic (PLEG): container finished" podID="28d7194e-4cc1-46d4-befc-1faef9286d01" containerID="a83b03e6f408a702be5fd57d4390a257702306a3046b14404577c1e764a67574" exitCode=0 Apr 16 18:55:13.676524 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:13.676351 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" event={"ID":"28d7194e-4cc1-46d4-befc-1faef9286d01","Type":"ContainerDied","Data":"a83b03e6f408a702be5fd57d4390a257702306a3046b14404577c1e764a67574"} Apr 16 18:55:13.676524 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:13.676360 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" Apr 16 18:55:13.676524 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:13.676378 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4" event={"ID":"28d7194e-4cc1-46d4-befc-1faef9286d01","Type":"ContainerDied","Data":"c33a2eca72bfcdc7a2fe65ab3f50a69e79667ddb9301eda58385b19833c5b1e9"} Apr 16 18:55:13.676524 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:13.676394 2571 scope.go:117] "RemoveContainer" containerID="a83b03e6f408a702be5fd57d4390a257702306a3046b14404577c1e764a67574" Apr 16 18:55:13.685218 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:13.685198 2571 scope.go:117] "RemoveContainer" containerID="a83b03e6f408a702be5fd57d4390a257702306a3046b14404577c1e764a67574" Apr 16 18:55:13.685536 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:55:13.685517 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a83b03e6f408a702be5fd57d4390a257702306a3046b14404577c1e764a67574\": container with ID starting with a83b03e6f408a702be5fd57d4390a257702306a3046b14404577c1e764a67574 not found: ID does not exist" containerID="a83b03e6f408a702be5fd57d4390a257702306a3046b14404577c1e764a67574" Apr 16 18:55:13.685617 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:13.685548 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a83b03e6f408a702be5fd57d4390a257702306a3046b14404577c1e764a67574"} err="failed to get container status \"a83b03e6f408a702be5fd57d4390a257702306a3046b14404577c1e764a67574\": rpc error: code = NotFound desc = could not find container \"a83b03e6f408a702be5fd57d4390a257702306a3046b14404577c1e764a67574\": container with ID starting with a83b03e6f408a702be5fd57d4390a257702306a3046b14404577c1e764a67574 not found: ID does not exist" Apr 16 18:55:13.696620 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:13.696580 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4"] Apr 16 18:55:13.700481 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:13.700455 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/success-200-isvc-89f53-predictor-646dcc4bbf-8hgx4"] Apr 16 18:55:13.921242 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:13.921205 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" podUID="148cab96-c158-4ff5-9f75-51c5a9d73396" containerName="switch-graph-89f53" probeResult="failure" 
output="HTTP probe failed with statuscode: 503" Apr 16 18:55:14.713821 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:14.713789 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28d7194e-4cc1-46d4-befc-1faef9286d01" path="/var/lib/kubelet/pods/28d7194e-4cc1-46d4-befc-1faef9286d01/volumes" Apr 16 18:55:18.920552 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:18.920517 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" podUID="148cab96-c158-4ff5-9f75-51c5a9d73396" containerName="switch-graph-89f53" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:55:23.920831 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:23.920789 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" podUID="148cab96-c158-4ff5-9f75-51c5a9d73396" containerName="switch-graph-89f53" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:55:23.921204 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:23.920884 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" Apr 16 18:55:25.259103 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:25.259072 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-89f53-d7bd89598-qk2zv_148cab96-c158-4ff5-9f75-51c5a9d73396/switch-graph-89f53/0.log" Apr 16 18:55:26.058135 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:26.058106 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-89f53-d7bd89598-qk2zv_148cab96-c158-4ff5-9f75-51c5a9d73396/switch-graph-89f53/0.log" Apr 16 18:55:26.851252 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:26.851224 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-89f53-d7bd89598-qk2zv_148cab96-c158-4ff5-9f75-51c5a9d73396/switch-graph-89f53/0.log" Apr 16 18:55:27.586857 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:27.586833 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-89f53-d7bd89598-qk2zv_148cab96-c158-4ff5-9f75-51c5a9d73396/switch-graph-89f53/0.log" Apr 16 18:55:28.381636 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:28.381607 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-89f53-d7bd89598-qk2zv_148cab96-c158-4ff5-9f75-51c5a9d73396/switch-graph-89f53/0.log" Apr 16 18:55:28.921273 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:28.921239 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" podUID="148cab96-c158-4ff5-9f75-51c5a9d73396" containerName="switch-graph-89f53" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:55:29.112267 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:29.112241 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-89f53-d7bd89598-qk2zv_148cab96-c158-4ff5-9f75-51c5a9d73396/switch-graph-89f53/0.log" Apr 16 18:55:29.840730 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:29.840697 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-89f53-d7bd89598-qk2zv_148cab96-c158-4ff5-9f75-51c5a9d73396/switch-graph-89f53/0.log" Apr 16 18:55:30.603128 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:30.603094 2571 log.go:25] "Finished parsing log file" 
path="/var/log/pods/kserve-ci-e2e-test_switch-graph-89f53-d7bd89598-qk2zv_148cab96-c158-4ff5-9f75-51c5a9d73396/switch-graph-89f53/0.log" Apr 16 18:55:31.393161 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:31.393133 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-89f53-d7bd89598-qk2zv_148cab96-c158-4ff5-9f75-51c5a9d73396/switch-graph-89f53/0.log" Apr 16 18:55:32.144212 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:32.144186 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-89f53-d7bd89598-qk2zv_148cab96-c158-4ff5-9f75-51c5a9d73396/switch-graph-89f53/0.log" Apr 16 18:55:32.899603 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:32.899557 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-89f53-d7bd89598-qk2zv_148cab96-c158-4ff5-9f75-51c5a9d73396/switch-graph-89f53/0.log" Apr 16 18:55:33.709339 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:33.709311 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-89f53-d7bd89598-qk2zv_148cab96-c158-4ff5-9f75-51c5a9d73396/switch-graph-89f53/0.log" Apr 16 18:55:33.920667 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:33.920632 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" podUID="148cab96-c158-4ff5-9f75-51c5a9d73396" containerName="switch-graph-89f53" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:55:38.883378 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:38.883345 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-h4qv7_263d1823-fcd5-4e8d-a12b-f7850e915d71/global-pull-secret-syncer/0.log" Apr 16 18:55:38.920865 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:38.920830 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" podUID="148cab96-c158-4ff5-9f75-51c5a9d73396" containerName="switch-graph-89f53" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:55:39.024126 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:39.024086 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-9s2wr_a0c27f7e-2a1f-463d-bcfa-0c3f56beb85f/konnectivity-agent/0.log" Apr 16 18:55:39.043566 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:39.043519 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-128-59.ec2.internal_b23767064b3798a2c7472b4227e16d3a/haproxy/0.log" Apr 16 18:55:40.379104 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:40.379083 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" Apr 16 18:55:40.400077 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:40.400050 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/148cab96-c158-4ff5-9f75-51c5a9d73396-openshift-service-ca-bundle\") pod \"148cab96-c158-4ff5-9f75-51c5a9d73396\" (UID: \"148cab96-c158-4ff5-9f75-51c5a9d73396\") " Apr 16 18:55:40.400244 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:40.400091 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/148cab96-c158-4ff5-9f75-51c5a9d73396-proxy-tls\") pod \"148cab96-c158-4ff5-9f75-51c5a9d73396\" (UID: \"148cab96-c158-4ff5-9f75-51c5a9d73396\") " Apr 16 18:55:40.400466 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:40.400440 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/148cab96-c158-4ff5-9f75-51c5a9d73396-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "148cab96-c158-4ff5-9f75-51c5a9d73396" (UID: "148cab96-c158-4ff5-9f75-51c5a9d73396"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:55:40.402129 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:40.402102 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/148cab96-c158-4ff5-9f75-51c5a9d73396-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "148cab96-c158-4ff5-9f75-51c5a9d73396" (UID: "148cab96-c158-4ff5-9f75-51c5a9d73396"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:55:40.500911 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:40.500834 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/148cab96-c158-4ff5-9f75-51c5a9d73396-openshift-service-ca-bundle\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:55:40.500911 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:40.500864 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/148cab96-c158-4ff5-9f75-51c5a9d73396-proxy-tls\") on node \"ip-10-0-128-59.ec2.internal\" DevicePath \"\"" Apr 16 18:55:40.749151 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:40.749118 2571 generic.go:358] "Generic (PLEG): container finished" podID="148cab96-c158-4ff5-9f75-51c5a9d73396" containerID="5c0077afa9e2969772cc449b65e360f6470475ce562636a626901747c6a34ec8" exitCode=0 Apr 16 18:55:40.749302 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:40.749174 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" Apr 16 18:55:40.749302 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:40.749190 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" event={"ID":"148cab96-c158-4ff5-9f75-51c5a9d73396","Type":"ContainerDied","Data":"5c0077afa9e2969772cc449b65e360f6470475ce562636a626901747c6a34ec8"} Apr 16 18:55:40.749302 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:40.749221 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv" event={"ID":"148cab96-c158-4ff5-9f75-51c5a9d73396","Type":"ContainerDied","Data":"72221fd362c945f1cb0b5037aa8100a5f16bc037eccb28247f3ef7043e0ae5c7"} Apr 16 18:55:40.749302 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:40.749237 2571 scope.go:117] "RemoveContainer" containerID="5c0077afa9e2969772cc449b65e360f6470475ce562636a626901747c6a34ec8" Apr 16 18:55:40.756681 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:40.756642 2571 scope.go:117] "RemoveContainer" containerID="5c0077afa9e2969772cc449b65e360f6470475ce562636a626901747c6a34ec8" Apr 16 18:55:40.756951 ip-10-0-128-59 kubenswrapper[2571]: E0416 18:55:40.756933 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c0077afa9e2969772cc449b65e360f6470475ce562636a626901747c6a34ec8\": container with ID starting with 5c0077afa9e2969772cc449b65e360f6470475ce562636a626901747c6a34ec8 not found: ID does not exist" containerID="5c0077afa9e2969772cc449b65e360f6470475ce562636a626901747c6a34ec8" Apr 16 18:55:40.756998 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:40.756959 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c0077afa9e2969772cc449b65e360f6470475ce562636a626901747c6a34ec8"} err="failed to get container status \"5c0077afa9e2969772cc449b65e360f6470475ce562636a626901747c6a34ec8\": rpc error: code = NotFound desc = could not find container \"5c0077afa9e2969772cc449b65e360f6470475ce562636a626901747c6a34ec8\": container with ID starting with 5c0077afa9e2969772cc449b65e360f6470475ce562636a626901747c6a34ec8 not found: ID does not exist" Apr 16 18:55:40.767685 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:40.767659 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv"] Apr 16 18:55:40.775922 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:40.775900 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-89f53-d7bd89598-qk2zv"] Apr 16 18:55:42.714519 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:42.714444 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="148cab96-c158-4ff5-9f75-51c5a9d73396" path="/var/lib/kubelet/pods/148cab96-c158-4ff5-9f75-51c5a9d73396/volumes" Apr 16 18:55:42.883293 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:42.883266 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-t25v4_7d53f165-c7b2-4238-a1ac-17e102839778/node-exporter/0.log" Apr 16 18:55:42.904038 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:42.904016 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-t25v4_7d53f165-c7b2-4238-a1ac-17e102839778/kube-rbac-proxy/0.log" Apr 16 18:55:42.928179 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:42.928158 2571 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_node-exporter-t25v4_7d53f165-c7b2-4238-a1ac-17e102839778/init-textfile/0.log" Apr 16 18:55:43.230069 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:43.230035 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-78f957474d-9p4bb_8a0d988b-3278-4a48-b564-cd21c0da8eec/prometheus-operator/0.log" Apr 16 18:55:43.246781 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:43.246748 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-78f957474d-9p4bb_8a0d988b-3278-4a48-b564-cd21c0da8eec/kube-rbac-proxy/0.log" Apr 16 18:55:43.371258 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:43.371233 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6f9ccd9bf9-sxcdp_025ecd6d-8672-40e5-bcff-7d12a5c16dca/thanos-query/0.log" Apr 16 18:55:43.395799 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:43.395774 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6f9ccd9bf9-sxcdp_025ecd6d-8672-40e5-bcff-7d12a5c16dca/kube-rbac-proxy-web/0.log" Apr 16 18:55:43.414703 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:43.414681 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6f9ccd9bf9-sxcdp_025ecd6d-8672-40e5-bcff-7d12a5c16dca/kube-rbac-proxy/0.log" Apr 16 18:55:43.434329 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:43.434312 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6f9ccd9bf9-sxcdp_025ecd6d-8672-40e5-bcff-7d12a5c16dca/prom-label-proxy/0.log" Apr 16 18:55:43.454485 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:43.454462 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6f9ccd9bf9-sxcdp_025ecd6d-8672-40e5-bcff-7d12a5c16dca/kube-rbac-proxy-rules/0.log" Apr 16 18:55:43.474793 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:43.474772 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6f9ccd9bf9-sxcdp_025ecd6d-8672-40e5-bcff-7d12a5c16dca/kube-rbac-proxy-metrics/0.log" Apr 16 18:55:46.099178 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.099144 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc"] Apr 16 18:55:46.099559 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.099425 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28d7194e-4cc1-46d4-befc-1faef9286d01" containerName="kserve-container" Apr 16 18:55:46.099559 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.099435 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d7194e-4cc1-46d4-befc-1faef9286d01" containerName="kserve-container" Apr 16 18:55:46.099559 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.099443 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="148cab96-c158-4ff5-9f75-51c5a9d73396" containerName="switch-graph-89f53" Apr 16 18:55:46.099559 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.099450 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="148cab96-c158-4ff5-9f75-51c5a9d73396" containerName="switch-graph-89f53" Apr 16 18:55:46.099559 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.099460 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="367f033e-09d9-4af4-bf1c-d7d5d49ba20e" containerName="kserve-container" 
Apr 16 18:55:46.099559 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.099466 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="367f033e-09d9-4af4-bf1c-d7d5d49ba20e" containerName="kserve-container" Apr 16 18:55:46.099559 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.099481 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da4fef60-9248-459d-8b42-5ccd6f77aa0d" containerName="splitter-graph-84e06" Apr 16 18:55:46.099559 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.099487 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4fef60-9248-459d-8b42-5ccd6f77aa0d" containerName="splitter-graph-84e06" Apr 16 18:55:46.099559 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.099536 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="148cab96-c158-4ff5-9f75-51c5a9d73396" containerName="switch-graph-89f53" Apr 16 18:55:46.099559 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.099544 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="367f033e-09d9-4af4-bf1c-d7d5d49ba20e" containerName="kserve-container" Apr 16 18:55:46.099559 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.099551 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="da4fef60-9248-459d-8b42-5ccd6f77aa0d" containerName="splitter-graph-84e06" Apr 16 18:55:46.099559 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.099560 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="28d7194e-4cc1-46d4-befc-1faef9286d01" containerName="kserve-container" Apr 16 18:55:46.103804 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.103783 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.105874 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.105853 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-vt288\"/\"default-dockercfg-nbxpr\"" Apr 16 18:55:46.105995 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.105893 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-vt288\"/\"kube-root-ca.crt\"" Apr 16 18:55:46.106513 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.106497 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-vt288\"/\"openshift-service-ca.crt\"" Apr 16 18:55:46.112033 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.112005 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc"] Apr 16 18:55:46.140518 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.140493 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2b9bf38-373a-42c4-bc7d-4827c3631944-lib-modules\") pod \"perf-node-gather-daemonset-zlfdc\" (UID: \"b2b9bf38-373a-42c4-bc7d-4827c3631944\") " pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.140661 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.140524 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b2b9bf38-373a-42c4-bc7d-4827c3631944-sys\") pod \"perf-node-gather-daemonset-zlfdc\" (UID: \"b2b9bf38-373a-42c4-bc7d-4827c3631944\") " pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.140734 
ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.140669 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/b2b9bf38-373a-42c4-bc7d-4827c3631944-proc\") pod \"perf-node-gather-daemonset-zlfdc\" (UID: \"b2b9bf38-373a-42c4-bc7d-4827c3631944\") " pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.140734 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.140715 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/b2b9bf38-373a-42c4-bc7d-4827c3631944-podres\") pod \"perf-node-gather-daemonset-zlfdc\" (UID: \"b2b9bf38-373a-42c4-bc7d-4827c3631944\") " pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.140824 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.140767 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2zc8\" (UniqueName: \"kubernetes.io/projected/b2b9bf38-373a-42c4-bc7d-4827c3631944-kube-api-access-d2zc8\") pod \"perf-node-gather-daemonset-zlfdc\" (UID: \"b2b9bf38-373a-42c4-bc7d-4827c3631944\") " pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.241821 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.241790 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/b2b9bf38-373a-42c4-bc7d-4827c3631944-proc\") pod \"perf-node-gather-daemonset-zlfdc\" (UID: \"b2b9bf38-373a-42c4-bc7d-4827c3631944\") " pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.241959 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.241829 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/b2b9bf38-373a-42c4-bc7d-4827c3631944-podres\") pod \"perf-node-gather-daemonset-zlfdc\" (UID: \"b2b9bf38-373a-42c4-bc7d-4827c3631944\") " pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.241959 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.241852 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d2zc8\" (UniqueName: \"kubernetes.io/projected/b2b9bf38-373a-42c4-bc7d-4827c3631944-kube-api-access-d2zc8\") pod \"perf-node-gather-daemonset-zlfdc\" (UID: \"b2b9bf38-373a-42c4-bc7d-4827c3631944\") " pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.241959 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.241876 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2b9bf38-373a-42c4-bc7d-4827c3631944-lib-modules\") pod \"perf-node-gather-daemonset-zlfdc\" (UID: \"b2b9bf38-373a-42c4-bc7d-4827c3631944\") " pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.241959 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.241896 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b2b9bf38-373a-42c4-bc7d-4827c3631944-sys\") pod \"perf-node-gather-daemonset-zlfdc\" (UID: \"b2b9bf38-373a-42c4-bc7d-4827c3631944\") " pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.241959 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.241924 2571 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/b2b9bf38-373a-42c4-bc7d-4827c3631944-proc\") pod \"perf-node-gather-daemonset-zlfdc\" (UID: \"b2b9bf38-373a-42c4-bc7d-4827c3631944\") " pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.242131 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.241979 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/b2b9bf38-373a-42c4-bc7d-4827c3631944-podres\") pod \"perf-node-gather-daemonset-zlfdc\" (UID: \"b2b9bf38-373a-42c4-bc7d-4827c3631944\") " pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.242131 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.241985 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b2b9bf38-373a-42c4-bc7d-4827c3631944-sys\") pod \"perf-node-gather-daemonset-zlfdc\" (UID: \"b2b9bf38-373a-42c4-bc7d-4827c3631944\") " pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.242131 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.242044 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2b9bf38-373a-42c4-bc7d-4827c3631944-lib-modules\") pod \"perf-node-gather-daemonset-zlfdc\" (UID: \"b2b9bf38-373a-42c4-bc7d-4827c3631944\") " pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.249726 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.249707 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2zc8\" (UniqueName: \"kubernetes.io/projected/b2b9bf38-373a-42c4-bc7d-4827c3631944-kube-api-access-d2zc8\") pod \"perf-node-gather-daemonset-zlfdc\" (UID: \"b2b9bf38-373a-42c4-bc7d-4827c3631944\") " pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.414289 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.414216 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.527782 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.527758 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc"] Apr 16 18:55:46.531395 ip-10-0-128-59 kubenswrapper[2571]: W0416 18:55:46.531364 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb2b9bf38_373a_42c4_bc7d_4827c3631944.slice/crio-a071c74832400e0d126eb583b60366db67bbd4135d80f4670a9cdbfaa38feff7 WatchSource:0}: Error finding container a071c74832400e0d126eb583b60366db67bbd4135d80f4670a9cdbfaa38feff7: Status 404 returned error can't find the container with id a071c74832400e0d126eb583b60366db67bbd4135d80f4670a9cdbfaa38feff7 Apr 16 18:55:46.533684 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.533669 2571 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 18:55:46.765402 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.765329 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-7d8x4_cc0e7134-4a00-4ae4-9fdd-e97a96de72f4/dns/0.log" Apr 16 18:55:46.766275 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.766252 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" event={"ID":"b2b9bf38-373a-42c4-bc7d-4827c3631944","Type":"ContainerStarted","Data":"0ac032eae29ed8b00a2f9085d5a5b66889b52bbc2d0df83481240049c8b90b9f"} Apr 16 18:55:46.766368 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.766285 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" event={"ID":"b2b9bf38-373a-42c4-bc7d-4827c3631944","Type":"ContainerStarted","Data":"a071c74832400e0d126eb583b60366db67bbd4135d80f4670a9cdbfaa38feff7"} Apr 16 18:55:46.766368 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.766359 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:46.781027 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.780992 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" podStartSLOduration=0.78098022 podStartE2EDuration="780.98022ms" podCreationTimestamp="2026-04-16 18:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:55:46.780721834 +0000 UTC m=+3204.609684967" watchObservedRunningTime="2026-04-16 18:55:46.78098022 +0000 UTC m=+3204.609943352" Apr 16 18:55:46.787079 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.787062 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-7d8x4_cc0e7134-4a00-4ae4-9fdd-e97a96de72f4/kube-rbac-proxy/0.log" Apr 16 18:55:46.859606 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:46.859562 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-82kk2_433f58f3-2b64-4ade-a6d2-2016a24672b3/dns-node-resolver/0.log" Apr 16 18:55:47.410449 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:47.410419 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_image-registry-7c9486fc9-58hcf_16ee3007-8734-4922-9eda-fdb2be620e47/registry/0.log" Apr 16 18:55:47.430277 ip-10-0-128-59 
kubenswrapper[2571]: I0416 18:55:47.430251 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-5pm7k_443d2e7a-08b9-4fa3-b1de-3c569b5764fd/node-ca/0.log" Apr 16 18:55:48.244807 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:48.244781 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5ccd7954bb-s86dh_31f4098e-db7f-4b17-a106-9e6043d3cfe0/router/0.log" Apr 16 18:55:48.634687 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:48.634660 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-pzcv4_5e6f71fd-65bf-41c3-b270-e42236bbe730/serve-healthcheck-canary/0.log" Apr 16 18:55:49.097735 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:49.097707 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-nglgp_12c773c6-f6a4-4bdf-9a43-e89bf6b599a0/kube-rbac-proxy/0.log" Apr 16 18:55:49.126166 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:49.126145 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-nglgp_12c773c6-f6a4-4bdf-9a43-e89bf6b599a0/exporter/0.log" Apr 16 18:55:49.149953 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:49.149928 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-nglgp_12c773c6-f6a4-4bdf-9a43-e89bf6b599a0/extractor/0.log" Apr 16 18:55:51.216742 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:51.216703 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_kserve-controller-manager-659c8cbdc-bggzs_55c50b9d-6894-48c1-84f2-00aa20a103d1/manager/0.log" Apr 16 18:55:51.492685 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:51.492602 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_odh-model-controller-696fc77849-8bmzs_c40caa73-7b22-4452-9860-7a9af77112b5/manager/0.log" Apr 16 18:55:51.536969 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:51.536941 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_seaweedfs-86cc847c5c-qs9lj_da2e5378-dcc8-4b0b-b5d0-20ef11c250c3/seaweedfs/0.log" Apr 16 18:55:52.778319 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:52.778296 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-vt288/perf-node-gather-daemonset-zlfdc" Apr 16 18:55:57.266220 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:57.266192 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-8cd6l_529cdc35-2ba8-48a7-8e7c-fefbb7c00f18/kube-multus-additional-cni-plugins/0.log" Apr 16 18:55:57.286183 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:57.286163 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-8cd6l_529cdc35-2ba8-48a7-8e7c-fefbb7c00f18/egress-router-binary-copy/0.log" Apr 16 18:55:57.304716 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:57.304695 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-8cd6l_529cdc35-2ba8-48a7-8e7c-fefbb7c00f18/cni-plugins/0.log" Apr 16 18:55:57.325295 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:57.325273 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-8cd6l_529cdc35-2ba8-48a7-8e7c-fefbb7c00f18/bond-cni-plugin/0.log" Apr 16 18:55:57.344779 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:57.344760 2571 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-8cd6l_529cdc35-2ba8-48a7-8e7c-fefbb7c00f18/routeoverride-cni/0.log" Apr 16 18:55:57.366041 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:57.366017 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-8cd6l_529cdc35-2ba8-48a7-8e7c-fefbb7c00f18/whereabouts-cni-bincopy/0.log" Apr 16 18:55:57.385482 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:57.385455 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-8cd6l_529cdc35-2ba8-48a7-8e7c-fefbb7c00f18/whereabouts-cni/0.log" Apr 16 18:55:57.762208 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:57.762179 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dztd6_dfe6a446-b0d0-4f14-ab3f-bc468e461320/kube-multus/0.log" Apr 16 18:55:57.809604 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:57.809561 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-tv8pg_28103df6-de37-4b7f-b3e8-6ef03a0d1cfe/network-metrics-daemon/0.log" Apr 16 18:55:57.827932 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:57.827905 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-tv8pg_28103df6-de37-4b7f-b3e8-6ef03a0d1cfe/kube-rbac-proxy/0.log" Apr 16 18:55:59.307403 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:59.307374 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tlgs4_bf2455ff-3f2f-4b0a-9d79-994f43be7b2f/ovn-controller/0.log" Apr 16 18:55:59.341343 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:59.341320 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tlgs4_bf2455ff-3f2f-4b0a-9d79-994f43be7b2f/ovn-acl-logging/0.log" Apr 16 18:55:59.360951 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:59.360912 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tlgs4_bf2455ff-3f2f-4b0a-9d79-994f43be7b2f/kube-rbac-proxy-node/0.log" Apr 16 18:55:59.382890 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:59.382857 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tlgs4_bf2455ff-3f2f-4b0a-9d79-994f43be7b2f/kube-rbac-proxy-ovn-metrics/0.log" Apr 16 18:55:59.403950 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:59.403909 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tlgs4_bf2455ff-3f2f-4b0a-9d79-994f43be7b2f/northd/0.log" Apr 16 18:55:59.425310 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:59.425283 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tlgs4_bf2455ff-3f2f-4b0a-9d79-994f43be7b2f/nbdb/0.log" Apr 16 18:55:59.447562 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:59.447541 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tlgs4_bf2455ff-3f2f-4b0a-9d79-994f43be7b2f/sbdb/0.log" Apr 16 18:55:59.542760 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:55:59.542732 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tlgs4_bf2455ff-3f2f-4b0a-9d79-994f43be7b2f/ovnkube-controller/0.log" Apr 16 18:56:00.556418 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:56:00.556391 2571 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-diagnostics_network-check-target-9b2bn_3d6007c8-6817-406b-894d-8f5fefd81911/network-check-target-container/0.log" Apr 16 18:56:01.492749 ip-10-0-128-59 kubenswrapper[2571]: I0416 18:56:01.492676 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-jw52k_96576415-5af9-4a8f-a718-72470bf1a7d9/iptables-alerter/0.log"