Apr 16 18:30:15.901604 ip-10-0-132-14 systemd[1]: Starting Kubernetes Kubelet...
Apr 16 18:30:16.375496 ip-10-0-132-14 kubenswrapper[2569]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 18:30:16.375496 ip-10-0-132-14 kubenswrapper[2569]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 16 18:30:16.375496 ip-10-0-132-14 kubenswrapper[2569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 18:30:16.375496 ip-10-0-132-14 kubenswrapper[2569]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 18:30:16.375496 ip-10-0-132-14 kubenswrapper[2569]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 18:30:16.377573 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.377384 2569 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 18:30:16.380691 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380667 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 18:30:16.380691 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380686 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 18:30:16.380691 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380692 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 18:30:16.380691 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380697 2569 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380701 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380706 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380710 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380714 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380718 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380721 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380725 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380729 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380732 2569 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380736 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380740 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380744 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380748 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380751 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380755 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380759 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380763 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380767 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380775 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 18:30:16.380924 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380779 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380784 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380788 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380793 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380797 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380801 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380805 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380809 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380814 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380819 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380823 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380827 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380832 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380836 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380841 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380846 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380850 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380854 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380859 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380863 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 18:30:16.381743 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380867 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380872 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380876 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380880 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380885 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380889 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380894 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380898 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380912 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380917 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380921 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380936 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380941 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380945 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380949 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380954 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380959 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380963 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380968 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380972 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 18:30:16.382465 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380976 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380980 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380985 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380988 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380993 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.380997 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381003 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381007 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381012 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381016 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381021 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381027 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381031 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381036 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381041 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381045 2569 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381050 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381057 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381065 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 18:30:16.382976 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381073 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 18:30:16.383837 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381080 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 18:30:16.383837 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381085 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 18:30:16.383837 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.381090 2569 feature_gate.go:328] unrecognized feature gate: Example2
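Every deprecation notice above points at the same fix: move the flag into the file the kubelet already reads via --config (here /etc/kubernetes/kubelet.conf, per the flag dump that follows). As a hedged sketch only, not this node's actual configuration, the snippet below renders the deprecated flags seen in this log as a KubeletConfiguration document using the k8s.io/kubelet/config/v1beta1 types; the eviction threshold shown is an invented placeholder, not a value from this log.

```go
// Sketch: emit the deprecated kubelet flags from this log as a
// KubeletConfiguration file for --config. Assumes the
// k8s.io/kubelet/config/v1beta1 and sigs.k8s.io/yaml modules are available;
// values are copied from the FLAG dump below except where noted.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Replaces --container-runtime-endpoint.
		ContainerRuntimeEndpoint: "/var/run/crio/crio.sock",
		// Replaces --volume-plugin-dir.
		VolumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec",
		// Replaces --system-reserved.
		SystemReserved: map[string]string{
			"cpu":               "500m",
			"ephemeral-storage": "1Gi",
			"memory":            "1Gi",
		},
		// --minimum-container-ttl-duration is superseded by eviction
		// settings; this threshold is a hypothetical example value.
		EvictionHard: map[string]string{"memory.available": "100Mi"},
	}
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```

Writing that output to the --config path and dropping the corresponding flags from the kubelet unit file should silence these particular warnings on the next restart.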
Apr 16 18:30:16.386503 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383046 2569 flags.go:64] FLAG: --address="0.0.0.0"
Apr 16 18:30:16.386503 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383062 2569 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 16 18:30:16.386503 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383074 2569 flags.go:64] FLAG: --anonymous-auth="true"
Apr 16 18:30:16.386503 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383082 2569 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 16 18:30:16.386503 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383089 2569 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 16 18:30:16.386503 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383094 2569 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 16 18:30:16.386503 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383101 2569 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 16 18:30:16.386503 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383108 2569 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 16 18:30:16.386503 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383113 2569 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 16 18:30:16.386503 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383118 2569 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 16 18:30:16.386503 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383123 2569 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383128 2569 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383133 2569 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383137 2569 flags.go:64] FLAG: --cgroup-root=""
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383142 2569 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383146 2569 flags.go:64] FLAG: --client-ca-file=""
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383152 2569 flags.go:64] FLAG: --cloud-config=""
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383157 2569 flags.go:64] FLAG: --cloud-provider="external"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383162 2569 flags.go:64] FLAG: --cluster-dns="[]"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383168 2569 flags.go:64] FLAG: --cluster-domain=""
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383173 2569 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383178 2569 flags.go:64] FLAG: --config-dir=""
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383183 2569 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383189 2569 flags.go:64] FLAG: --container-log-max-files="5"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383196 2569 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383201 2569 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383206 2569 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383212 2569 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383216 2569 flags.go:64] FLAG: --contention-profiling="false"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383221 2569 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383226 2569 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383231 2569 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383236 2569 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383244 2569 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383249 2569 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 16 18:30:16.387081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383254 2569 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383260 2569 flags.go:64] FLAG: --enable-load-reader="false"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383265 2569 flags.go:64] FLAG: --enable-server="true"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383270 2569 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383278 2569 flags.go:64] FLAG: --event-burst="100"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383283 2569 flags.go:64] FLAG: --event-qps="50"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383287 2569 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383293 2569 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383298 2569 flags.go:64] FLAG: --eviction-hard=""
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383304 2569 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383309 2569 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383314 2569 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383319 2569 flags.go:64] FLAG: --eviction-soft=""
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383324 2569 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383329 2569 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383351 2569 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383357 2569 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383361 2569 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383366 2569 flags.go:64] FLAG: --fail-swap-on="true"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383371 2569 flags.go:64] FLAG: --feature-gates=""
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383378 2569 flags.go:64] FLAG: --file-check-frequency="20s"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383383 2569 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383388 2569 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383393 2569 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383399 2569 flags.go:64] FLAG: --healthz-port="10248"
Apr 16 18:30:16.387747 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383404 2569 flags.go:64] FLAG: --help="false"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383409 2569 flags.go:64] FLAG: --hostname-override="ip-10-0-132-14.ec2.internal"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383415 2569 flags.go:64] FLAG: --housekeeping-interval="10s"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383420 2569 flags.go:64] FLAG: --http-check-frequency="20s"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383425 2569 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383431 2569 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383438 2569 flags.go:64] FLAG: --image-gc-high-threshold="85"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383443 2569 flags.go:64] FLAG: --image-gc-low-threshold="80"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383448 2569 flags.go:64] FLAG: --image-service-endpoint=""
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383453 2569 flags.go:64] FLAG: --kernel-memcg-notification="false"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383458 2569 flags.go:64] FLAG: --kube-api-burst="100"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383463 2569 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383468 2569 flags.go:64] FLAG: --kube-api-qps="50"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383473 2569 flags.go:64] FLAG: --kube-reserved=""
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383477 2569 flags.go:64] FLAG: --kube-reserved-cgroup=""
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383482 2569 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383487 2569 flags.go:64] FLAG: --kubelet-cgroups=""
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383492 2569 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383497 2569 flags.go:64] FLAG: --lock-file=""
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383502 2569 flags.go:64] FLAG: --log-cadvisor-usage="false"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383507 2569 flags.go:64] FLAG: --log-flush-frequency="5s"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383513 2569 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383522 2569 flags.go:64] FLAG: --log-json-split-stream="false"
Apr 16 18:30:16.388408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383528 2569 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383533 2569 flags.go:64] FLAG: --log-text-split-stream="false"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383538 2569 flags.go:64] FLAG: --logging-format="text"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383543 2569 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383548 2569 flags.go:64] FLAG: --make-iptables-util-chains="true"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383553 2569 flags.go:64] FLAG: --manifest-url=""
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383558 2569 flags.go:64] FLAG: --manifest-url-header=""
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383566 2569 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383571 2569 flags.go:64] FLAG: --max-open-files="1000000"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383578 2569 flags.go:64] FLAG: --max-pods="110"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383583 2569 flags.go:64] FLAG: --maximum-dead-containers="-1"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383588 2569 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383592 2569 flags.go:64] FLAG: --memory-manager-policy="None"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383597 2569 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383602 2569 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383607 2569 flags.go:64] FLAG: --node-ip="0.0.0.0"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383613 2569 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383624 2569 flags.go:64] FLAG: --node-status-max-images="50"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383629 2569 flags.go:64] FLAG: --node-status-update-frequency="10s"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383634 2569 flags.go:64] FLAG: --oom-score-adj="-999"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383639 2569 flags.go:64] FLAG: --pod-cidr=""
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383644 2569 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc76bab72f320de3d4105c90d73c4fb139c09e20ce0fa8dcbc0cb59920d27dec"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383653 2569 flags.go:64] FLAG: --pod-manifest-path=""
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383657 2569 flags.go:64] FLAG: --pod-max-pids="-1"
Apr 16 18:30:16.388985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383663 2569 flags.go:64] FLAG: --pods-per-core="0"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383667 2569 flags.go:64] FLAG: --port="10250"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383672 2569 flags.go:64] FLAG: --protect-kernel-defaults="false"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383677 2569 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-048f6e6633d570d69"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383683 2569 flags.go:64] FLAG: --qos-reserved=""
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383689 2569 flags.go:64] FLAG: --read-only-port="10255"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383694 2569 flags.go:64] FLAG: --register-node="true"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383699 2569 flags.go:64] FLAG: --register-schedulable="true"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383704 2569 flags.go:64] FLAG: --register-with-taints=""
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383710 2569 flags.go:64] FLAG: --registry-burst="10"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383714 2569 flags.go:64] FLAG: --registry-qps="5"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383719 2569 flags.go:64] FLAG: --reserved-cpus=""
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383723 2569 flags.go:64] FLAG: --reserved-memory=""
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383729 2569 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383734 2569 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383740 2569 flags.go:64] FLAG: --rotate-certificates="false"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383744 2569 flags.go:64] FLAG: --rotate-server-certificates="false"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383749 2569 flags.go:64] FLAG: --runonce="false"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383754 2569 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383759 2569 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383764 2569 flags.go:64] FLAG: --seccomp-default="false"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383769 2569 flags.go:64] FLAG: --serialize-image-pulls="true"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383774 2569 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383779 2569 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383784 2569 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383789 2569 flags.go:64] FLAG: --storage-driver-password="root"
Apr 16 18:30:16.389571 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383794 2569 flags.go:64] FLAG: --storage-driver-secure="false"
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383798 2569 flags.go:64] FLAG: --storage-driver-table="stats"
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383803 2569 flags.go:64] FLAG: --storage-driver-user="root"
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383808 2569 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383813 2569 flags.go:64] FLAG: --sync-frequency="1m0s"
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383818 2569 flags.go:64] FLAG: --system-cgroups=""
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383823 2569 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383832 2569 flags.go:64] FLAG: --system-reserved-cgroup=""
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383836 2569 flags.go:64] FLAG: --tls-cert-file=""
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383841 2569 flags.go:64] FLAG: --tls-cipher-suites="[]"
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383847 2569 flags.go:64] FLAG: --tls-min-version=""
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383858 2569 flags.go:64] FLAG: --tls-private-key-file=""
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383863 2569 flags.go:64] FLAG: --topology-manager-policy="none"
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383868 2569 flags.go:64] FLAG: --topology-manager-policy-options=""
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383873 2569 flags.go:64] FLAG: --topology-manager-scope="container"
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383877 2569 flags.go:64] FLAG: --v="2"
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383884 2569 flags.go:64] FLAG: --version="false"
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383895 2569 flags.go:64] FLAG: --vmodule=""
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383902 2569 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Apr 16 18:30:16.390187 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.383907 2569 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
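The flags.go:64 block is the kubelet echoing the effective value of every command-line flag at startup, which makes it a convenient place to confirm what the unit file actually passed. Below is a small hedged sketch for pulling those pairs out of journal output; the regular expression is inferred from the format shown above, not taken from kubelet source.

```go
// Sketch: extract FLAG: --name="value" pairs from kubelet journal output,
// e.g. piped in from `journalctl -u kubelet` (unit name assumed).
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches entries like: FLAG: --config="/etc/kubernetes/kubelet.conf"
	re := regexp.MustCompile(`FLAG: (--[\w-]+)="([^"]*)"`)
	flags := map[string]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can run long
	for sc.Scan() {
		// FindAll handles lines that carry more than one FLAG entry.
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			flags[m[1]] = m[2]
		}
	}
	fmt.Println(flags["--config"], flags["--feature-gates"])
}
```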
Apr 16 18:30:16.392836 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.385127 2569 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 16 18:30:16.392836 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.392090 2569 server.go:530] "Kubelet version" kubeletVersion="v1.33.9"
Apr 16 18:30:16.392836 ip-10-0-132-14 kubenswrapper[2569]: I0416
18:30:16.392106 2569 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 18:30:16.392836 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392154 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 16 18:30:16.392836 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392159 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 16 18:30:16.392836 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392162 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 16 18:30:16.392836 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392166 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 16 18:30:16.392836 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392169 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 16 18:30:16.392836 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392171 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 16 18:30:16.392836 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392174 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 16 18:30:16.392836 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392177 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 16 18:30:16.392836 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392179 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 16 18:30:16.392836 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392182 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392185 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392188 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392191 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392193 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392196 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392199 2569 feature_gate.go:328] unrecognized feature gate: Example2 Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392202 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392205 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392207 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392210 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392212 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392216 2569 feature_gate.go:328] unrecognized feature gate: 
ImageModeStatusReporting Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392219 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392222 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392224 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392227 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392230 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392232 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 16 18:30:16.393202 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392235 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392239 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392244 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392247 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392251 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392255 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392259 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392261 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392264 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392267 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392270 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392273 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392275 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392278 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392280 2569 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392283 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392286 2569 feature_gate.go:328] unrecognized feature gate: Example Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392288 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392291 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 16 18:30:16.393689 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392293 2569 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392296 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392299 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392301 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392304 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392307 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392309 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392312 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392315 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392317 2569 
feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392320 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392323 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392325 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392327 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392330 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392349 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392352 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392356 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392358 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392361 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 16 18:30:16.394157 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392364 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392366 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392369 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392372 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392375 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392377 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392380 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392382 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392385 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392388 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392390 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392393 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 16 18:30:16.394696 
ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392395 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392398 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392400 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392403 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392405 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392408 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 16 18:30:16.394696 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392411 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 16 18:30:16.395130 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.392416 2569 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 16 18:30:16.395130 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392530 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 16 18:30:16.395130 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392535 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 16 18:30:16.395130 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392538 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 16 18:30:16.395130 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392540 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 16 18:30:16.395130 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392544 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 16 18:30:16.395130 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392546 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 16 18:30:16.395130 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392549 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 16 18:30:16.395130 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392551 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 16 18:30:16.395130 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392554 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 16 18:30:16.395130 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392557 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 16 18:30:16.395130 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392560 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 16 18:30:16.395130 ip-10-0-132-14 kubenswrapper[2569]: W0416 
18:30:16.392564 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Apr 16 18:30:16.395130 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392568 2569 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392571 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392574 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392576 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392579 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392582 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392584 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392587 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392589 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392592 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392594 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392597 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392599 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392602 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392604 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392607 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392609 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392612 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392615 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392617 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 16 18:30:16.395527 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392620 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392623 2569 feature_gate.go:328] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392625 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392628 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392631 2569 feature_gate.go:328] unrecognized feature gate: Example2 Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392633 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392635 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392638 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392641 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392644 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392646 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392649 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392651 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392654 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392656 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392659 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392661 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392664 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392667 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392669 2569 feature_gate.go:328] unrecognized feature gate: Example Apr 16 18:30:16.396001 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392672 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392674 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392677 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392679 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392682 2569 
feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392685 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392687 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392690 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392692 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392695 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392697 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392700 2569 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392702 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392705 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392708 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392710 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392713 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392715 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392718 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392721 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 16 18:30:16.396506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392724 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 16 18:30:16.396985 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392728 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 16 18:30:16.396985 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392731 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 16 18:30:16.396985 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392734 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 16 18:30:16.396985 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392737 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 16 18:30:16.396985 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392740 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 16 18:30:16.396985 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392742 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 16 18:30:16.396985 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392745 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 16 18:30:16.396985 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392747 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 16 18:30:16.396985 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392750 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 16 18:30:16.396985 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392752 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 16 18:30:16.396985 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392755 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 16 18:30:16.396985 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392758 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 16 18:30:16.396985 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:16.392761 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 16 18:30:16.396985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.392765 2569 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 16 18:30:16.396985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.393461 2569 server.go:962] "Client rotation is on, will bootstrap in background" Apr 16 18:30:16.397371 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.396324 2569 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Apr 16 18:30:16.397371 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.397345 2569 server.go:1019] "Starting client certificate rotation" Apr 16 18:30:16.397462 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.397442 2569 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 16 18:30:16.397491 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.397486 2569 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 16 
18:30:16.427219 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.427203 2569 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 16 18:30:16.431132 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.431107 2569 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 16 18:30:16.447836 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.447820 2569 log.go:25] "Validated CRI v1 runtime API" Apr 16 18:30:16.453416 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.453399 2569 log.go:25] "Validated CRI v1 image API" Apr 16 18:30:16.455493 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.455476 2569 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 18:30:16.457872 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.457844 2569 fs.go:135] Filesystem UUIDs: map[5487483c-d984-467b-a871-1a419615b1c5:/dev/nvme0n1p3 7B77-95E7:/dev/nvme0n1p2 973fc27e-7045-4af8-b556-5f53d0d7d30f:/dev/nvme0n1p4] Apr 16 18:30:16.457872 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.457864 2569 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Apr 16 18:30:16.458840 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.458824 2569 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 16 18:30:16.463662 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.463553 2569 manager.go:217] Machine: {Timestamp:2026-04-16 18:30:16.461565915 +0000 UTC m=+0.439649352 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3097351 MemoryCapacity:33164488704 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec284a993e71ddb035004107e132a42a SystemUUID:ec284a99-3e71-ddb0-3500-4107e132a42a BootID:f8f812ee-782f-49d6-8db8-bba3a0a15341 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582242304 Type:vfs Inodes:4048399 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632898560 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6098944 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582246400 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:5b:77:09:be:7d Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:5b:77:09:be:7d Speed:0 Mtu:9001} {Name:ovs-system MacAddress:32:bf:44:78:ce:39 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164488704 HugePages:[{PageSize:1048576 NumPages:0} 
{PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Apr 16 18:30:16.463662 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.463654 2569 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Apr 16 18:30:16.463766 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.463735 2569 manager.go:233] Version: {KernelVersion:5.14.0-570.104.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260401-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Apr 16 18:30:16.464724 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.464701 2569 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 18:30:16.464898 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.464726 2569 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-132-14.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 18:30:16.464974 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.464912 2569 topology_manager.go:138] "Creating topology 
manager with none policy" Apr 16 18:30:16.464974 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.464927 2569 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 18:30:16.464974 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.464945 2569 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 16 18:30:16.465900 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.465888 2569 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 16 18:30:16.467914 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.467902 2569 state_mem.go:36] "Initialized new in-memory state store" Apr 16 18:30:16.468049 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.468038 2569 server.go:1267] "Using root directory" path="/var/lib/kubelet" Apr 16 18:30:16.470360 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.470347 2569 kubelet.go:491] "Attempting to sync node with API server" Apr 16 18:30:16.470418 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.470376 2569 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 18:30:16.470418 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.470393 2569 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Apr 16 18:30:16.470418 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.470408 2569 kubelet.go:397] "Adding apiserver pod source" Apr 16 18:30:16.470526 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.470421 2569 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 18:30:16.471763 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.471750 2569 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 16 18:30:16.471842 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.471776 2569 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 16 18:30:16.477432 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.477413 2569 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 16 18:30:16.479634 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.479620 2569 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 18:30:16.481220 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.481208 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 16 18:30:16.481274 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.481225 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 16 18:30:16.481274 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.481235 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 16 18:30:16.481274 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.481241 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 16 18:30:16.481274 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.481247 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 16 18:30:16.481274 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.481254 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Apr 16 18:30:16.481274 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.481260 2569 plugins.go:616] "Loaded volume plugin" 
pluginName="kubernetes.io/iscsi" Apr 16 18:30:16.481274 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.481265 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 16 18:30:16.481274 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.481272 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 16 18:30:16.481536 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.481278 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 16 18:30:16.481536 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.481287 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 16 18:30:16.481536 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.481296 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 16 18:30:16.482175 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.482163 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 16 18:30:16.482175 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.482174 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Apr 16 18:30:16.485782 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.485769 2569 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 18:30:16.485843 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.485807 2569 server.go:1295] "Started kubelet" Apr 16 18:30:16.485931 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.485882 2569 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 18:30:16.485992 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.485930 2569 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 18:30:16.486035 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.485992 2569 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 16 18:30:16.486597 ip-10-0-132-14 systemd[1]: Started Kubernetes Kubelet. 
Apr 16 18:30:16.487072 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.487032 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-132-14.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 16 18:30:16.487208 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.487170 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-132-14.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 18:30:16.487261 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.487218 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 18:30:16.487309 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.487225 2569 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 18:30:16.489046 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.489030 2569 server.go:317] "Adding debug handlers to kubelet server" Apr 16 18:30:16.492490 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.492474 2569 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Apr 16 18:30:16.493049 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.493030 2569 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 18:30:16.493809 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.493787 2569 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 18:30:16.493809 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.493790 2569 volume_manager.go:295] "The desired_state_of_world populator starts" Apr 16 18:30:16.493949 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.493819 2569 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 16 18:30:16.493949 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.493827 2569 factory.go:55] Registering systemd factory Apr 16 18:30:16.493949 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.493892 2569 factory.go:223] Registration of the systemd container factory successfully Apr 16 18:30:16.494084 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.494040 2569 reconstruct.go:97] "Volume reconstruction finished" Apr 16 18:30:16.494084 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.494051 2569 reconciler.go:26] "Reconciler: start to sync state" Apr 16 18:30:16.494241 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.494219 2569 factory.go:153] Registering CRI-O factory Apr 16 18:30:16.494241 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.494238 2569 factory.go:223] Registration of the crio container factory successfully Apr 16 18:30:16.494370 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.494311 2569 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Apr 16 18:30:16.494370 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.494353 2569 factory.go:103] Registering Raw 
factory
Apr 16 18:30:16.494370 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.494372 2569 manager.go:1196] Started watching for new ooms in manager
Apr 16 18:30:16.494672 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.494649 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-14.ec2.internal\" not found"
Apr 16 18:30:16.494881 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.494864 2569 manager.go:319] Starting recovery of all containers
Apr 16 18:30:16.497637 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.494406 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-132-14.ec2.internal.18a6e9d7e3599fec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-132-14.ec2.internal,UID:ip-10-0-132-14.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-132-14.ec2.internal,},FirstTimestamp:2026-04-16 18:30:16.485781484 +0000 UTC m=+0.463864920,LastTimestamp:2026-04-16 18:30:16.485781484 +0000 UTC m=+0.463864920,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-132-14.ec2.internal,}"
Apr 16 18:30:16.498883 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.498773 2569 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Apr 16 18:30:16.499980 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.499951 2569 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-132-14.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Apr 16 18:30:16.500163 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.500140 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 18:30:16.505327 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.505302 2569 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-w8gb5"
Apr 16 18:30:16.509452 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.509436 2569 manager.go:324] Recovery completed
Apr 16 18:30:16.513168 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.513152 2569 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-w8gb5"
Apr 16 18:30:16.513498 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.513486 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 18:30:16.515746 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.515730 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 18:30:16.515803 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.515761 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 18:30:16.515803 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.515774 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasSufficientPID"
Apr 16 18:30:16.516294 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.516280 2569 cpu_manager.go:222] "Starting CPU manager" policy="none"
Apr 16 18:30:16.516294 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.516294 2569 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Apr 16 18:30:16.516424 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.516345 2569 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 18:30:16.518003 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.517937 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-132-14.ec2.internal.18a6e9d7e522dad4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-132-14.ec2.internal,UID:ip-10-0-132-14.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-132-14.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-132-14.ec2.internal,},FirstTimestamp:2026-04-16 18:30:16.515746516 +0000 UTC m=+0.493829956,LastTimestamp:2026-04-16 18:30:16.515746516 +0000 UTC m=+0.493829956,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-132-14.ec2.internal,}"
Apr 16 18:30:16.518491 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.518480 2569 policy_none.go:49] "None policy: Start"
Apr 16 18:30:16.518534 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.518496 2569 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 16 18:30:16.518534 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.518509 2569 state_mem.go:35] "Initializing new in-memory state store"
Apr 16 18:30:16.565116 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.564925 2569 manager.go:341] "Starting Device Plugin manager"
Apr 16 18:30:16.565238 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.565137 2569 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 18:30:16.565238 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.565148 2569 server.go:85] "Starting device plugin registration server"
Apr 16 18:30:16.565367 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.565358 2569 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 18:30:16.565411 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.565369 2569 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 18:30:16.565497 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.565477 2569 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Apr 16 18:30:16.565595 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.565579 2569 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Apr 16 18:30:16.565595 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.565592 2569 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 18:30:16.566077 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.566057 2569 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Apr 16 18:30:16.566126 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.566103 2569 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-132-14.ec2.internal\" not found"
Apr 16 18:30:16.641870 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.641812 2569 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 16 18:30:16.643048 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.643029 2569 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 16 18:30:16.643153 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.643054 2569 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 16 18:30:16.643153 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.643070 2569 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 18:30:16.643153 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.643076 2569 kubelet.go:2451] "Starting kubelet main sync loop"
Apr 16 18:30:16.643153 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.643108 2569 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Apr 16 18:30:16.645899 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.645884 2569 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 18:30:16.666081 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.666066 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 18:30:16.666875 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.666860 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 18:30:16.666958 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.666889 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 18:30:16.666958 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.666899 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasSufficientPID"
Apr 16 18:30:16.666958 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.666920 2569 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-132-14.ec2.internal"
Apr 16 18:30:16.673785 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.673762 2569 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-132-14.ec2.internal"
Apr 16 18:30:16.673879 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.673806 2569 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-132-14.ec2.internal\": node \"ip-10-0-132-14.ec2.internal\" not found"
Apr 16 18:30:16.693850 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.693829 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-14.ec2.internal\" not found"
Apr 16 18:30:16.743479 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.743453 2569 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-132-14.ec2.internal"]
Apr 16 18:30:16.743569 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.743523 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 18:30:16.744318 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.744304 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 18:30:16.744415 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.744331 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 18:30:16.744415 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.744361 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasSufficientPID"
Apr 16 18:30:16.745608 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.745596 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 18:30:16.745779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.745766 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal"
Apr 16 18:30:16.745841 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.745793 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 18:30:16.746376 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.746356 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 18:30:16.746376 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.746368 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 18:30:16.746503 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.746385 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 18:30:16.746503 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.746387 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 18:30:16.746503 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.746396 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasSufficientPID"
Apr 16 18:30:16.746636 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.746398 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasSufficientPID"
Apr 16 18:30:16.747969 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.747956 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-14.ec2.internal"
Apr 16 18:30:16.748034 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.747981 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 18:30:16.748615 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.748598 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 18:30:16.748685 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.748627 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 18:30:16.748685 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.748639 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeHasSufficientPID"
Apr 16 18:30:16.769907 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.769887 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-132-14.ec2.internal\" not found" node="ip-10-0-132-14.ec2.internal"
Apr 16 18:30:16.772138 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.772123 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-132-14.ec2.internal\" not found" node="ip-10-0-132-14.ec2.internal"
Apr 16 18:30:16.794818 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.794798 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-14.ec2.internal\" not found"
Apr 16 18:30:16.796328 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.796313 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5c9e91c38fa43bfd6e69aef3cdcafb41-config\") pod \"kube-apiserver-proxy-ip-10-0-132-14.ec2.internal\" (UID: \"5c9e91c38fa43bfd6e69aef3cdcafb41\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-132-14.ec2.internal"
Apr 16 18:30:16.796381 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.796353 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c9756edff8da65478d24ce030daa7e12-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal\" (UID: \"c9756edff8da65478d24ce030daa7e12\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal"
Apr 16 18:30:16.796381 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.796369 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9756edff8da65478d24ce030daa7e12-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal\" (UID: \"c9756edff8da65478d24ce030daa7e12\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal"
Apr 16 18:30:16.895179 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.895114 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-14.ec2.internal\" not found"
Apr 16 18:30:16.897298 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.897284 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c9756edff8da65478d24ce030daa7e12-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal\" (UID: \"c9756edff8da65478d24ce030daa7e12\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal"
Apr 16 18:30:16.897360 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.897308 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9756edff8da65478d24ce030daa7e12-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal\" (UID: \"c9756edff8da65478d24ce030daa7e12\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal"
Apr 16 18:30:16.897360 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.897330 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5c9e91c38fa43bfd6e69aef3cdcafb41-config\") pod \"kube-apiserver-proxy-ip-10-0-132-14.ec2.internal\" (UID: \"5c9e91c38fa43bfd6e69aef3cdcafb41\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-132-14.ec2.internal"
Apr 16 18:30:16.897424 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.897386 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c9756edff8da65478d24ce030daa7e12-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal\" (UID: \"c9756edff8da65478d24ce030daa7e12\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal"
Apr 16 18:30:16.897455 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.897446 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9756edff8da65478d24ce030daa7e12-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal\" (UID: \"c9756edff8da65478d24ce030daa7e12\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal"
Apr 16 18:30:16.897489 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:16.897472 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5c9e91c38fa43bfd6e69aef3cdcafb41-config\") pod \"kube-apiserver-proxy-ip-10-0-132-14.ec2.internal\" (UID: \"5c9e91c38fa43bfd6e69aef3cdcafb41\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-132-14.ec2.internal"
Apr 16 18:30:16.995904 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:16.995865 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-14.ec2.internal\" not found"
Apr 16 18:30:17.073218 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.073189 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal"
Apr 16 18:30:17.076177 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.076161 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-14.ec2.internal"
Apr 16 18:30:17.096794 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:17.096773 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-14.ec2.internal\" not found"
Apr 16 18:30:17.197307 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:17.197236 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-14.ec2.internal\" not found"
Apr 16 18:30:17.297835 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:17.297807 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-14.ec2.internal\" not found"
Apr 16 18:30:17.397408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.397381 2569 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Apr 16 18:30:17.397842 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.397530 2569 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Apr 16 18:30:17.398472 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:17.398442 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-14.ec2.internal\" not found"
Apr 16 18:30:17.493346 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.493306 2569 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Apr 16 18:30:17.499217 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:17.499192 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-14.ec2.internal\" not found"
Apr 16 18:30:17.502746 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.502726 2569 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 16 18:30:17.514865 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.514826 2569 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-15 18:25:16 +0000 UTC" deadline="2027-09-10 20:14:01.227163936 +0000 UTC"
Apr 16 18:30:17.514865 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.514861 2569 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="12289h43m43.712308014s"
Apr 16 18:30:17.527621 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.527603 2569 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-x8rz4"
Apr 16 18:30:17.536100 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.536083 2569 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-x8rz4"
Apr 16 18:30:17.555174 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:17.555142 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c9e91c38fa43bfd6e69aef3cdcafb41.slice/crio-cc0f4b30e317734a270aec917b514e79af22b1da06807c5e4d8e5befe7e2d364 WatchSource:0}: Error finding container cc0f4b30e317734a270aec917b514e79af22b1da06807c5e4d8e5befe7e2d364: Status 404 returned error can't find the container with id cc0f4b30e317734a270aec917b514e79af22b1da06807c5e4d8e5befe7e2d364
Apr 16 18:30:17.555405 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:17.555384 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9756edff8da65478d24ce030daa7e12.slice/crio-73f1418a01977b5cd79d03155a22aa829d2ecb7b00f0262c8c4a8d8b5b664e24 WatchSource:0}: Error finding container 73f1418a01977b5cd79d03155a22aa829d2ecb7b00f0262c8c4a8d8b5b664e24: Status 404 returned error can't find the container with id 73f1418a01977b5cd79d03155a22aa829d2ecb7b00f0262c8c4a8d8b5b664e24
Apr 16 18:30:17.561320 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.561305 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 16 18:30:17.573110 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.573090 2569 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 18:30:17.599956 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:17.599931 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-14.ec2.internal\" not found"
Apr 16 18:30:17.628902 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.628878 2569 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 18:30:17.646014 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.645968 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-14.ec2.internal" event={"ID":"5c9e91c38fa43bfd6e69aef3cdcafb41","Type":"ContainerStarted","Data":"cc0f4b30e317734a270aec917b514e79af22b1da06807c5e4d8e5befe7e2d364"}
Apr 16 18:30:17.646837 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.646818 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal" event={"ID":"c9756edff8da65478d24ce030daa7e12","Type":"ContainerStarted","Data":"73f1418a01977b5cd79d03155a22aa829d2ecb7b00f0262c8c4a8d8b5b664e24"}
Apr 16 18:30:17.700261 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:17.700230 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-14.ec2.internal\" not found"
Apr 16 18:30:17.800789 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:17.800710 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-132-14.ec2.internal\" not found"
Apr 16 18:30:17.831094 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.831069 2569 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 18:30:17.893735 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.893697 2569 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-14.ec2.internal"
Apr 16 18:30:17.904397 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.904371 2569 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 16 18:30:17.905452 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.905432 2569 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal"
Apr 16 18:30:17.915827 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:17.915810 2569 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 16 18:30:18.471753 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.471721 2569 apiserver.go:52] "Watching apiserver"
Apr 16 18:30:18.476926 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.476893 2569 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Apr 16 18:30:18.477346 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.477298 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg","openshift-dns/node-resolver-8h69z","openshift-image-registry/node-ca-rfstz","openshift-multus/multus-95kg5","openshift-multus/multus-additional-cni-plugins-d2jts","openshift-multus/network-metrics-daemon-kk4tm","openshift-network-operator/iptables-alerter-h4gn9","openshift-ovn-kubernetes/ovnkube-node-s62vp","kube-system/konnectivity-agent-nx8s6","kube-system/kube-apiserver-proxy-ip-10-0-132-14.ec2.internal","openshift-cluster-node-tuning-operator/tuned-6696h","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal","openshift-network-diagnostics/network-check-target-tfkdr"]
Apr 16 18:30:18.478816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.478796 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-h4gn9"
Apr 16 18:30:18.480611 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.480581 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Apr 16 18:30:18.480859 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.480832 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Apr 16 18:30:18.480953 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.480832 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Apr 16 18:30:18.480953 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.480869 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-dr2x5\""
Apr 16 18:30:18.481096 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.481073 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8h69z"
Apr 16 18:30:18.481181 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.481162 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-rfstz"
Apr 16 18:30:18.482645 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.482486 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.483392 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.483053 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Apr 16 18:30:18.483392 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.483111 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Apr 16 18:30:18.483392 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.483188 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-rhvdt\""
Apr 16 18:30:18.483392 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.483273 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Apr 16 18:30:18.483617 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.483496 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Apr 16 18:30:18.483617 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.483516 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Apr 16 18:30:18.483805 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.483783 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-8cksw\""
Apr 16 18:30:18.484039 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.484020 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Apr 16 18:30:18.484203 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.484168 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d2jts"
Apr 16 18:30:18.484329 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.484310 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Apr 16 18:30:18.484329 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.484321 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-j78md\""
Apr 16 18:30:18.484487 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.484322 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Apr 16 18:30:18.484769 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.484752 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Apr 16 18:30:18.485309 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.485291 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm"
Apr 16 18:30:18.485410 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:18.485381 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kk4tm" podUID="dc3c5cbb-7bc5-4228-88bf-021a899d1e57"
Apr 16 18:30:18.485721 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.485703 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Apr 16 18:30:18.485816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.485774 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-8cgh9\""
Apr 16 18:30:18.486679 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.486661 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.487631 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.487321 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Apr 16 18:30:18.488778 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.488708 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-nx8s6"
Apr 16 18:30:18.489566 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.489546 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Apr 16 18:30:18.489662 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.489603 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Apr 16 18:30:18.489877 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.489858 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Apr 16 18:30:18.489959 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.489931 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Apr 16 18:30:18.490011 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.489970 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Apr 16 18:30:18.490060 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.490041 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-z7h8v\""
Apr 16 18:30:18.491862 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.490447 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Apr 16 18:30:18.491862 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.490897 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\""
Apr 16 18:30:18.491862 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.491164 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\""
Apr 16 18:30:18.491862 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.491265 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-sscph\""
Apr 16 18:30:18.492109 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.491999 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-6696h"
Apr 16 18:30:18.493783 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.493739 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\""
Apr 16 18:30:18.493783 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.493752 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\""
Apr 16 18:30:18.494005 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.493986 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-2q659\""
Apr 16 18:30:18.494097 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.494081 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr"
Apr 16 18:30:18.494352 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:18.494318 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-tfkdr" podUID="687e1330-7999-4eea-a8c8-b11fd9d8448f"
Apr 16 18:30:18.495708 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.495688 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg"
Apr 16 18:30:18.497287 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.497264 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\""
Apr 16 18:30:18.497409 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.497297 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\""
Apr 16 18:30:18.497512 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.497496 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-mvzrq\""
Apr 16 18:30:18.497512 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.497507 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\""
Apr 16 18:30:18.505296 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505277 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr9mp\" (UniqueName: \"kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp\") pod \"network-check-target-tfkdr\" (UID: \"687e1330-7999-4eea-a8c8-b11fd9d8448f\") " pod="openshift-network-diagnostics/network-check-target-tfkdr"
Apr 16 18:30:18.505407 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505308 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-os-release\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.505407 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505344 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/89df2e8c-e3ce-4dda-afe0-e3720c021e56-cni-binary-copy\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.505407 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505377 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-etc-openvswitch\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.505407 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505396 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a54cc5a5-36a6-41a7-bb25-fc1eb332a322-host-slash\") pod \"iptables-alerter-h4gn9\" (UID: \"a54cc5a5-36a6-41a7-bb25-fc1eb332a322\") " pod="openshift-network-operator/iptables-alerter-h4gn9"
Apr 16 18:30:18.505575 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505436 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-multus-conf-dir\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.505575 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505482 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e30175fe-31a1-408c-bf6b-fcf72a498c7c-tmp\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h"
Apr 16 18:30:18.505575 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505511 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lx75\" (UniqueName: \"kubernetes.io/projected/e30175fe-31a1-408c-bf6b-fcf72a498c7c-kube-api-access-7lx75\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h"
Apr 16 18:30:18.505575 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505537 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9036c3c-a41d-405f-acbf-c30968863203-os-release\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts"
Apr 16 18:30:18.505575 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505563 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/c9036c3c-a41d-405f-acbf-c30968863203-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts"
Apr 16 18:30:18.505814 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505584 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-var-lib-cni-bin\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.505814 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505599 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-run-netns\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.505814 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505613 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-node-log\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.505814 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505646 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tdzz\" (UniqueName: \"kubernetes.io/projected/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-kube-api-access-9tdzz\") pod \"network-metrics-daemon-kk4tm\" (UID: \"dc3c5cbb-7bc5-4228-88bf-021a899d1e57\") " pod="openshift-multus/network-metrics-daemon-kk4tm"
Apr 16 18:30:18.505814 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505666 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/89df2e8c-e3ce-4dda-afe0-e3720c021e56-multus-daemon-config\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.505814 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505687 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-slash\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.505814 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505720 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-systemd\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h"
Apr 16 18:30:18.505814 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505737 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8b1f3fed-8fbc-4087-a06e-b4bb1396ba36-hosts-file\") pod \"node-resolver-8h69z\" (UID: \"8b1f3fed-8fbc-4087-a06e-b4bb1396ba36\") " pod="openshift-dns/node-resolver-8h69z"
Apr 16 18:30:18.505814 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505752 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a4e163bd-89bf-4b55-9d51-38032e333eb1-serviceca\") pod \"node-ca-rfstz\" (UID: \"a4e163bd-89bf-4b55-9d51-38032e333eb1\") " pod="openshift-image-registry/node-ca-rfstz"
Apr 16 18:30:18.505814 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505771 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9ztl\" (UniqueName: \"kubernetes.io/projected/89df2e8c-e3ce-4dda-afe0-e3720c021e56-kube-api-access-f9ztl\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.505814 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505806 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb2c10db-7942-42a5-a328-06839f22865c-env-overrides\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.506224 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505843 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-cnibin\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.506224 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505886 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-var-lib-cni-multus\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.506224 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505906 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5rzf\" (UniqueName: \"kubernetes.io/projected/a54cc5a5-36a6-41a7-bb25-fc1eb332a322-kube-api-access-x5rzf\") pod \"iptables-alerter-h4gn9\" (UID: \"a54cc5a5-36a6-41a7-bb25-fc1eb332a322\") " pod="openshift-network-operator/iptables-alerter-h4gn9"
Apr 16 18:30:18.506224 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505928 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-run-k8s-cni-cncf-io\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.506224 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505947 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-host\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h"
Apr 16 18:30:18.506224 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.505970 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8b1f3fed-8fbc-4087-a06e-b4bb1396ba36-tmp-dir\") pod \"node-resolver-8h69z\" (UID: \"8b1f3fed-8fbc-4087-a06e-b4bb1396ba36\") " pod="openshift-dns/node-resolver-8h69z"
Apr 16 18:30:18.506224 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506028 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9036c3c-a41d-405f-acbf-c30968863203-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts"
Apr 16 18:30:18.506224 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506052 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c9036c3c-a41d-405f-acbf-c30968863203-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts"
Apr 16 18:30:18.506224 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506079 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/a54cc5a5-36a6-41a7-bb25-fc1eb332a322-iptables-alerter-script\") pod \"iptables-alerter-h4gn9\" (UID: \"a54cc5a5-36a6-41a7-bb25-fc1eb332a322\") " pod="openshift-network-operator/iptables-alerter-h4gn9"
Apr 16 18:30:18.506224 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506119 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-system-cni-dir\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.506224 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506155 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-run-systemd\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.506224 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506181 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-sys\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h"
Apr 16 18:30:18.506224 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506205 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-var-lib-kubelet\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h"
Apr 16 18:30:18.506224 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506225 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-hostroot\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.506762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506240 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-run-openvswitch\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.506762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506263 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-run-ovn-kubernetes\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.506762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506288 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bb2c10db-7942-42a5-a328-06839f22865c-ovn-node-metrics-cert\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.506762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506303 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-modprobe-d\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h"
Apr 16 18:30:18.506762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506325 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-multus-socket-dir-parent\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.506762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506420 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fx8d\" (UniqueName: \"kubernetes.io/projected/c9036c3c-a41d-405f-acbf-c30968863203-kube-api-access-9fx8d\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts"
Apr 16 18:30:18.506762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506449 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/a72d0e51-8bc7-48d1-b552-f8f4b4a532f9-agent-certs\") pod \"konnectivity-agent-nx8s6\" (UID: \"a72d0e51-8bc7-48d1-b552-f8f4b4a532f9\") " pod="kube-system/konnectivity-agent-nx8s6"
Apr 16 18:30:18.506762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506466 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-run-ovn\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.506762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506491 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-sysctl-d\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h"
Apr 16 18:30:18.506762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506530 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-run\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h"
Apr 16 18:30:18.506762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506587 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-tuned\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h"
Apr 16 18:30:18.506762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506624 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs\") pod \"network-metrics-daemon-kk4tm\" (UID: \"dc3c5cbb-7bc5-4228-88bf-021a899d1e57\") " pod="openshift-multus/network-metrics-daemon-kk4tm"
Apr 16 18:30:18.506762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506662 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-run-netns\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.506762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506686 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-kubelet\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.506762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506726 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt2j7\" (UniqueName: \"kubernetes.io/projected/a4e163bd-89bf-4b55-9d51-38032e333eb1-kube-api-access-qt2j7\") pod \"node-ca-rfstz\" (UID: \"a4e163bd-89bf-4b55-9d51-38032e333eb1\") " pod="openshift-image-registry/node-ca-rfstz"
Apr 16 18:30:18.506762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506765 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9036c3c-a41d-405f-acbf-c30968863203-system-cni-dir\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506796 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9036c3c-a41d-405f-acbf-c30968863203-cnibin\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506819 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-var-lib-kubelet\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506840 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-run-multus-certs\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506869 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-etc-kubernetes\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506897 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-var-lib-openvswitch\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506922 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-log-socket\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506955 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-cni-netd\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.506985 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bb2c10db-7942-42a5-a328-06839f22865c-ovnkube-script-lib\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.507010 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgtpx\" (UniqueName: \"kubernetes.io/projected/bb2c10db-7942-42a5-a328-06839f22865c-kube-api-access-tgtpx\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.507035 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-multus-cni-dir\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.507057 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-systemd-units\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.507100 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bb2c10db-7942-42a5-a328-06839f22865c-ovnkube-config\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.507151 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-kubernetes\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.507180 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-sysctl-conf\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.507203 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-lib-modules\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.507226 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4e163bd-89bf-4b55-9d51-38032e333eb1-host\") pod \"node-ca-rfstz\" (UID: \"a4e163bd-89bf-4b55-9d51-38032e333eb1\") " pod="openshift-image-registry/node-ca-rfstz"
Apr 16 18:30:18.507480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.507247 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-cni-bin\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.508208 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.507272 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp"
Apr 16 18:30:18.508208 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.507297 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-sysconfig\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.508208 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.507320 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj4m8\" (UniqueName: \"kubernetes.io/projected/8b1f3fed-8fbc-4087-a06e-b4bb1396ba36-kube-api-access-fj4m8\") pod \"node-resolver-8h69z\" (UID: \"8b1f3fed-8fbc-4087-a06e-b4bb1396ba36\") " pod="openshift-dns/node-resolver-8h69z" Apr 16 18:30:18.508208 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.507357 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9036c3c-a41d-405f-acbf-c30968863203-cni-binary-copy\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.508208 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.507382 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/a72d0e51-8bc7-48d1-b552-f8f4b4a532f9-konnectivity-ca\") pod \"konnectivity-agent-nx8s6\" (UID: \"a72d0e51-8bc7-48d1-b552-f8f4b4a532f9\") " pod="kube-system/konnectivity-agent-nx8s6" Apr 16 18:30:18.536856 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.536825 2569 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-15 18:25:17 +0000 UTC" deadline="2027-11-30 12:43:00.527188795 +0000 UTC" Apr 16 18:30:18.536955 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.536858 2569 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="14226h12m41.990335026s" Apr 16 18:30:18.594647 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.594614 2569 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 18:30:18.607637 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.607606 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs\") pod \"network-metrics-daemon-kk4tm\" (UID: \"dc3c5cbb-7bc5-4228-88bf-021a899d1e57\") " pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:18.607831 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.607651 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-run-netns\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.607831 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.607712 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-run-netns\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.607831 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.607747 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-kubelet\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.607831 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.607775 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qt2j7\" (UniqueName: \"kubernetes.io/projected/a4e163bd-89bf-4b55-9d51-38032e333eb1-kube-api-access-qt2j7\") pod \"node-ca-rfstz\" (UID: \"a4e163bd-89bf-4b55-9d51-38032e333eb1\") " pod="openshift-image-registry/node-ca-rfstz" Apr 16 18:30:18.607831 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:18.607785 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:18.607831 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.607803 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9036c3c-a41d-405f-acbf-c30968863203-system-cni-dir\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.607831 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.607830 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9036c3c-a41d-405f-acbf-c30968863203-cnibin\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.607835 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-kubelet\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:18.607879 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs podName:dc3c5cbb-7bc5-4228-88bf-021a899d1e57 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:19.107840095 +0000 UTC m=+3.085923520 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs") pod "network-metrics-daemon-kk4tm" (UID: "dc3c5cbb-7bc5-4228-88bf-021a899d1e57") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.607888 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9036c3c-a41d-405f-acbf-c30968863203-cnibin\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.607903 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-var-lib-kubelet\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.607920 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9036c3c-a41d-405f-acbf-c30968863203-system-cni-dir\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.607934 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-run-multus-certs\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.607978 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-run-multus-certs\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.607997 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-var-lib-kubelet\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608005 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-etc-kubernetes\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608045 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-etc-kubernetes\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608055 2569 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-var-lib-openvswitch\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608095 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-var-lib-openvswitch\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608096 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-log-socket\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608128 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-cni-netd\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608130 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-log-socket\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608144 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bb2c10db-7942-42a5-a328-06839f22865c-ovnkube-script-lib\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.608172 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608170 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tgtpx\" (UniqueName: \"kubernetes.io/projected/bb2c10db-7942-42a5-a328-06839f22865c-kube-api-access-tgtpx\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608177 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-cni-netd\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608191 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-multus-cni-dir\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 
18:30:18.608321 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-systemd-units\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608358 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-multus-cni-dir\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608378 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bb2c10db-7942-42a5-a328-06839f22865c-ovnkube-config\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608401 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-systemd-units\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608408 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-kubernetes\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608432 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-sysctl-conf\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608447 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-lib-modules\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608465 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-kubernetes\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608488 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4e163bd-89bf-4b55-9d51-38032e333eb1-host\") pod \"node-ca-rfstz\" (UID: \"a4e163bd-89bf-4b55-9d51-38032e333eb1\") " pod="openshift-image-registry/node-ca-rfstz" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608516 
2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-cni-bin\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608534 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608550 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-sysctl-conf\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608559 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-lib-modules\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608551 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-sysconfig\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.608779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608575 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-sysconfig\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608599 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608606 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4e163bd-89bf-4b55-9d51-38032e333eb1-host\") pod \"node-ca-rfstz\" (UID: \"a4e163bd-89bf-4b55-9d51-38032e333eb1\") " pod="openshift-image-registry/node-ca-rfstz" Apr 16 18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608628 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-cni-bin\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 
18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608651 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fj4m8\" (UniqueName: \"kubernetes.io/projected/8b1f3fed-8fbc-4087-a06e-b4bb1396ba36-kube-api-access-fj4m8\") pod \"node-resolver-8h69z\" (UID: \"8b1f3fed-8fbc-4087-a06e-b4bb1396ba36\") " pod="openshift-dns/node-resolver-8h69z" Apr 16 18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608672 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9036c3c-a41d-405f-acbf-c30968863203-cni-binary-copy\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608707 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/a72d0e51-8bc7-48d1-b552-f8f4b4a532f9-konnectivity-ca\") pod \"konnectivity-agent-nx8s6\" (UID: \"a72d0e51-8bc7-48d1-b552-f8f4b4a532f9\") " pod="kube-system/konnectivity-agent-nx8s6" Apr 16 18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608740 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qr9mp\" (UniqueName: \"kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp\") pod \"network-check-target-tfkdr\" (UID: \"687e1330-7999-4eea-a8c8-b11fd9d8448f\") " pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608757 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-os-release\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608771 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/89df2e8c-e3ce-4dda-afe0-e3720c021e56-cni-binary-copy\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608785 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-etc-openvswitch\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608815 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-socket-dir\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608835 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-os-release\") pod \"multus-95kg5\" 
(UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608854 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a54cc5a5-36a6-41a7-bb25-fc1eb332a322-host-slash\") pod \"iptables-alerter-h4gn9\" (UID: \"a54cc5a5-36a6-41a7-bb25-fc1eb332a322\") " pod="openshift-network-operator/iptables-alerter-h4gn9" Apr 16 18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608881 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-multus-conf-dir\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608905 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e30175fe-31a1-408c-bf6b-fcf72a498c7c-tmp\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608918 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-etc-openvswitch\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.609551 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608930 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7lx75\" (UniqueName: \"kubernetes.io/projected/e30175fe-31a1-408c-bf6b-fcf72a498c7c-kube-api-access-7lx75\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608965 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a54cc5a5-36a6-41a7-bb25-fc1eb332a322-host-slash\") pod \"iptables-alerter-h4gn9\" (UID: \"a54cc5a5-36a6-41a7-bb25-fc1eb332a322\") " pod="openshift-network-operator/iptables-alerter-h4gn9" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608969 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-multus-conf-dir\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.608992 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9036c3c-a41d-405f-acbf-c30968863203-os-release\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609025 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/c9036c3c-a41d-405f-acbf-c30968863203-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609053 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-var-lib-cni-bin\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609079 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-run-netns\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609107 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-node-log\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609136 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9tdzz\" (UniqueName: \"kubernetes.io/projected/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-kube-api-access-9tdzz\") pod \"network-metrics-daemon-kk4tm\" (UID: \"dc3c5cbb-7bc5-4228-88bf-021a899d1e57\") " pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609162 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/89df2e8c-e3ce-4dda-afe0-e3720c021e56-multus-daemon-config\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609187 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-slash\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609196 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-run-netns\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609212 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-systemd\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609237 2569 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8b1f3fed-8fbc-4087-a06e-b4bb1396ba36-hosts-file\") pod \"node-resolver-8h69z\" (UID: \"8b1f3fed-8fbc-4087-a06e-b4bb1396ba36\") " pod="openshift-dns/node-resolver-8h69z" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609241 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-var-lib-cni-bin\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609274 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a4e163bd-89bf-4b55-9d51-38032e333eb1-serviceca\") pod \"node-ca-rfstz\" (UID: \"a4e163bd-89bf-4b55-9d51-38032e333eb1\") " pod="openshift-image-registry/node-ca-rfstz" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609290 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-node-log\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609286 2569 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 16 18:30:18.610326 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609304 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-device-dir\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609370 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9036c3c-a41d-405f-acbf-c30968863203-cni-binary-copy\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609394 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/a72d0e51-8bc7-48d1-b552-f8f4b4a532f9-konnectivity-ca\") pod \"konnectivity-agent-nx8s6\" (UID: \"a72d0e51-8bc7-48d1-b552-f8f4b4a532f9\") " pod="kube-system/konnectivity-agent-nx8s6" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609374 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-etc-selinux\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609437 2569 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-f9ztl\" (UniqueName: \"kubernetes.io/projected/89df2e8c-e3ce-4dda-afe0-e3720c021e56-kube-api-access-f9ztl\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609439 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/89df2e8c-e3ce-4dda-afe0-e3720c021e56-cni-binary-copy\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609459 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8b1f3fed-8fbc-4087-a06e-b4bb1396ba36-hosts-file\") pod \"node-resolver-8h69z\" (UID: \"8b1f3fed-8fbc-4087-a06e-b4bb1396ba36\") " pod="openshift-dns/node-resolver-8h69z" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609463 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb2c10db-7942-42a5-a328-06839f22865c-env-overrides\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609470 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9036c3c-a41d-405f-acbf-c30968863203-os-release\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609496 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-cnibin\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609522 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-var-lib-cni-multus\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609523 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/c9036c3c-a41d-405f-acbf-c30968863203-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609573 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-slash\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609577 2569 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-var-lib-cni-multus\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609596 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-systemd\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609632 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x5rzf\" (UniqueName: \"kubernetes.io/projected/a54cc5a5-36a6-41a7-bb25-fc1eb332a322-kube-api-access-x5rzf\") pod \"iptables-alerter-h4gn9\" (UID: \"a54cc5a5-36a6-41a7-bb25-fc1eb332a322\") " pod="openshift-network-operator/iptables-alerter-h4gn9" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609665 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-run-k8s-cni-cncf-io\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609690 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-host\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.611150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609704 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-cnibin\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609716 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8b1f3fed-8fbc-4087-a06e-b4bb1396ba36-tmp-dir\") pod \"node-resolver-8h69z\" (UID: \"8b1f3fed-8fbc-4087-a06e-b4bb1396ba36\") " pod="openshift-dns/node-resolver-8h69z" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609743 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9036c3c-a41d-405f-acbf-c30968863203-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609770 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c9036c3c-a41d-405f-acbf-c30968863203-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609802 2569 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-registration-dir\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609829 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-sys-fs\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609856 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/a54cc5a5-36a6-41a7-bb25-fc1eb332a322-iptables-alerter-script\") pod \"iptables-alerter-h4gn9\" (UID: \"a54cc5a5-36a6-41a7-bb25-fc1eb332a322\") " pod="openshift-network-operator/iptables-alerter-h4gn9" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609882 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-system-cni-dir\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609906 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-run-systemd\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609932 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-sys\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609961 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-var-lib-kubelet\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609984 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a4e163bd-89bf-4b55-9d51-38032e333eb1-serviceca\") pod \"node-ca-rfstz\" (UID: \"a4e163bd-89bf-4b55-9d51-38032e333eb1\") " pod="openshift-image-registry/node-ca-rfstz" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.609995 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-kubelet-dir\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610030 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-hostroot\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610057 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-run-openvswitch\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610085 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-run-ovn-kubernetes\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610113 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bb2c10db-7942-42a5-a328-06839f22865c-ovn-node-metrics-cert\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.611998 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610156 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-modprobe-d\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610183 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-multus-socket-dir-parent\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610208 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9fx8d\" (UniqueName: \"kubernetes.io/projected/c9036c3c-a41d-405f-acbf-c30968863203-kube-api-access-9fx8d\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610212 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bb2c10db-7942-42a5-a328-06839f22865c-ovnkube-script-lib\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610259 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: 
\"kubernetes.io/secret/a72d0e51-8bc7-48d1-b552-f8f4b4a532f9-agent-certs\") pod \"konnectivity-agent-nx8s6\" (UID: \"a72d0e51-8bc7-48d1-b552-f8f4b4a532f9\") " pod="kube-system/konnectivity-agent-nx8s6" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610275 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-run-systemd\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610283 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bb2c10db-7942-42a5-a328-06839f22865c-ovnkube-config\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610308 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb2c10db-7942-42a5-a328-06839f22865c-env-overrides\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610328 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-run-ovn\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610351 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-host-run-ovn-kubernetes\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610352 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-hostroot\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610287 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-run-ovn\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610419 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c9036c3c-a41d-405f-acbf-c30968863203-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610427 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/bb2c10db-7942-42a5-a328-06839f22865c-run-openvswitch\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610440 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-sys\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610478 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-host\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610480 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-multus-socket-dir-parent\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610500 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-sysctl-d\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.612816 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610511 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-var-lib-kubelet\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.613642 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610530 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-run\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.613642 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610556 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-tuned\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.613642 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610566 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/a54cc5a5-36a6-41a7-bb25-fc1eb332a322-iptables-alerter-script\") pod \"iptables-alerter-h4gn9\" (UID: \"a54cc5a5-36a6-41a7-bb25-fc1eb332a322\") " pod="openshift-network-operator/iptables-alerter-h4gn9" Apr 16 18:30:18.613642 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610585 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87jnj\" 
(UniqueName: \"kubernetes.io/projected/38147eb3-233a-48ee-ac14-02eabe278c0a-kube-api-access-87jnj\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.613642 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610611 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-modprobe-d\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.613642 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610720 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/89df2e8c-e3ce-4dda-afe0-e3720c021e56-multus-daemon-config\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.613642 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610727 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-sysctl-d\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.613642 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610791 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e30175fe-31a1-408c-bf6b-fcf72a498c7c-run\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.613642 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610844 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-host-run-k8s-cni-cncf-io\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.613642 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610895 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8b1f3fed-8fbc-4087-a06e-b4bb1396ba36-tmp-dir\") pod \"node-resolver-8h69z\" (UID: \"8b1f3fed-8fbc-4087-a06e-b4bb1396ba36\") " pod="openshift-dns/node-resolver-8h69z" Apr 16 18:30:18.613642 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.610393 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89df2e8c-e3ce-4dda-afe0-e3720c021e56-system-cni-dir\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.613642 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.611356 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9036c3c-a41d-405f-acbf-c30968863203-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.613642 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.613113 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/e30175fe-31a1-408c-bf6b-fcf72a498c7c-tmp\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.613642 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.613148 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bb2c10db-7942-42a5-a328-06839f22865c-ovn-node-metrics-cert\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.613642 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.613522 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/e30175fe-31a1-408c-bf6b-fcf72a498c7c-etc-tuned\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.614535 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.614500 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/a72d0e51-8bc7-48d1-b552-f8f4b4a532f9-agent-certs\") pod \"konnectivity-agent-nx8s6\" (UID: \"a72d0e51-8bc7-48d1-b552-f8f4b4a532f9\") " pod="kube-system/konnectivity-agent-nx8s6" Apr 16 18:30:18.614984 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:18.614914 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:30:18.614984 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:18.614935 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:30:18.614984 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:18.614948 2569 projected.go:194] Error preparing data for projected volume kube-api-access-qr9mp for pod openshift-network-diagnostics/network-check-target-tfkdr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:18.615201 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:18.615018 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp podName:687e1330-7999-4eea-a8c8-b11fd9d8448f nodeName:}" failed. No retries permitted until 2026-04-16 18:30:19.115000395 +0000 UTC m=+3.093083831 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qr9mp" (UniqueName: "kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp") pod "network-check-target-tfkdr" (UID: "687e1330-7999-4eea-a8c8-b11fd9d8448f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:18.616316 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.616255 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt2j7\" (UniqueName: \"kubernetes.io/projected/a4e163bd-89bf-4b55-9d51-38032e333eb1-kube-api-access-qt2j7\") pod \"node-ca-rfstz\" (UID: \"a4e163bd-89bf-4b55-9d51-38032e333eb1\") " pod="openshift-image-registry/node-ca-rfstz" Apr 16 18:30:18.616912 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.616847 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgtpx\" (UniqueName: \"kubernetes.io/projected/bb2c10db-7942-42a5-a328-06839f22865c-kube-api-access-tgtpx\") pod \"ovnkube-node-s62vp\" (UID: \"bb2c10db-7942-42a5-a328-06839f22865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.617640 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.617599 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9ztl\" (UniqueName: \"kubernetes.io/projected/89df2e8c-e3ce-4dda-afe0-e3720c021e56-kube-api-access-f9ztl\") pod \"multus-95kg5\" (UID: \"89df2e8c-e3ce-4dda-afe0-e3720c021e56\") " pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.618266 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.618189 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj4m8\" (UniqueName: \"kubernetes.io/projected/8b1f3fed-8fbc-4087-a06e-b4bb1396ba36-kube-api-access-fj4m8\") pod \"node-resolver-8h69z\" (UID: \"8b1f3fed-8fbc-4087-a06e-b4bb1396ba36\") " pod="openshift-dns/node-resolver-8h69z" Apr 16 18:30:18.618688 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.618672 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fx8d\" (UniqueName: \"kubernetes.io/projected/c9036c3c-a41d-405f-acbf-c30968863203-kube-api-access-9fx8d\") pod \"multus-additional-cni-plugins-d2jts\" (UID: \"c9036c3c-a41d-405f-acbf-c30968863203\") " pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.618756 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.618694 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lx75\" (UniqueName: \"kubernetes.io/projected/e30175fe-31a1-408c-bf6b-fcf72a498c7c-kube-api-access-7lx75\") pod \"tuned-6696h\" (UID: \"e30175fe-31a1-408c-bf6b-fcf72a498c7c\") " pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.619176 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.619151 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5rzf\" (UniqueName: \"kubernetes.io/projected/a54cc5a5-36a6-41a7-bb25-fc1eb332a322-kube-api-access-x5rzf\") pod \"iptables-alerter-h4gn9\" (UID: \"a54cc5a5-36a6-41a7-bb25-fc1eb332a322\") " pod="openshift-network-operator/iptables-alerter-h4gn9" Apr 16 18:30:18.619848 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.619829 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tdzz\" (UniqueName: \"kubernetes.io/projected/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-kube-api-access-9tdzz\") pod 
\"network-metrics-daemon-kk4tm\" (UID: \"dc3c5cbb-7bc5-4228-88bf-021a899d1e57\") " pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:18.711253 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.711218 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-device-dir\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.711253 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.711256 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-etc-selinux\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.711510 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.711284 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-registration-dir\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.711510 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.711305 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-sys-fs\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.711510 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.711312 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-device-dir\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.711510 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.711328 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-kubelet-dir\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.711510 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.711384 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-87jnj\" (UniqueName: \"kubernetes.io/projected/38147eb3-233a-48ee-ac14-02eabe278c0a-kube-api-access-87jnj\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.711510 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.711402 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-kubelet-dir\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.711510 
ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.711413 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-etc-selinux\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.711510 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.711442 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-registration-dir\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.711510 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.711463 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-socket-dir\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.711510 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.711418 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-sys-fs\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.711882 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.711575 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/38147eb3-233a-48ee-ac14-02eabe278c0a-socket-dir\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.719614 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.719580 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-87jnj\" (UniqueName: \"kubernetes.io/projected/38147eb3-233a-48ee-ac14-02eabe278c0a-kube-api-access-87jnj\") pod \"aws-ebs-csi-driver-node-gfcxg\" (UID: \"38147eb3-233a-48ee-ac14-02eabe278c0a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.793528 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.793434 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-h4gn9" Apr 16 18:30:18.800378 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.800356 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8h69z" Apr 16 18:30:18.807999 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.807977 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-rfstz" Apr 16 18:30:18.812679 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.812658 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-95kg5" Apr 16 18:30:18.818676 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.818651 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d2jts" Apr 16 18:30:18.825331 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.825313 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:18.831892 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.831871 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-nx8s6" Apr 16 18:30:18.837447 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.837429 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-6696h" Apr 16 18:30:18.843024 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.843004 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" Apr 16 18:30:18.908321 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:18.908289 2569 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 18:30:19.114754 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:19.114665 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs\") pod \"network-metrics-daemon-kk4tm\" (UID: \"dc3c5cbb-7bc5-4228-88bf-021a899d1e57\") " pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:19.114888 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:19.114836 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:19.114951 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:19.114939 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs podName:dc3c5cbb-7bc5-4228-88bf-021a899d1e57 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:20.114916766 +0000 UTC m=+4.093000211 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs") pod "network-metrics-daemon-kk4tm" (UID: "dc3c5cbb-7bc5-4228-88bf-021a899d1e57") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:19.154265 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:19.154222 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9036c3c_a41d_405f_acbf_c30968863203.slice/crio-e3e0f907a91c81c253e831c49bb9e3324eb4f5bb1d564523dab9a3f9b6e6c018 WatchSource:0}: Error finding container e3e0f907a91c81c253e831c49bb9e3324eb4f5bb1d564523dab9a3f9b6e6c018: Status 404 returned error can't find the container with id e3e0f907a91c81c253e831c49bb9e3324eb4f5bb1d564523dab9a3f9b6e6c018 Apr 16 18:30:19.155228 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:19.155202 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b1f3fed_8fbc_4087_a06e_b4bb1396ba36.slice/crio-e24f47452d659fb93450641b1dc1c963661a38a9fcddd0bf4846fa44c50d24e2 WatchSource:0}: Error finding container e24f47452d659fb93450641b1dc1c963661a38a9fcddd0bf4846fa44c50d24e2: Status 404 returned error can't find the container with id e24f47452d659fb93450641b1dc1c963661a38a9fcddd0bf4846fa44c50d24e2 Apr 16 18:30:19.157111 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:19.157093 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode30175fe_31a1_408c_bf6b_fcf72a498c7c.slice/crio-4576a0881c270e534bd6aa5e9d523a8d83bf201b1c6d2d3f217642737d2a7136 WatchSource:0}: Error finding container 4576a0881c270e534bd6aa5e9d523a8d83bf201b1c6d2d3f217642737d2a7136: Status 404 returned error can't find the container with id 4576a0881c270e534bd6aa5e9d523a8d83bf201b1c6d2d3f217642737d2a7136 Apr 16 18:30:19.159522 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:19.159500 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb2c10db_7942_42a5_a328_06839f22865c.slice/crio-c39695e88fdadc88f18fa0ea3adf81fd811251542187760afd65a429a2204896 WatchSource:0}: Error finding container c39695e88fdadc88f18fa0ea3adf81fd811251542187760afd65a429a2204896: Status 404 returned error can't find the container with id c39695e88fdadc88f18fa0ea3adf81fd811251542187760afd65a429a2204896 Apr 16 18:30:19.160239 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:19.160218 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38147eb3_233a_48ee_ac14_02eabe278c0a.slice/crio-903a9ee83a0ec7d456a13792d91f7339acb818a0f855ae7a95162eef1ed0d0d6 WatchSource:0}: Error finding container 903a9ee83a0ec7d456a13792d91f7339acb818a0f855ae7a95162eef1ed0d0d6: Status 404 returned error can't find the container with id 903a9ee83a0ec7d456a13792d91f7339acb818a0f855ae7a95162eef1ed0d0d6 Apr 16 18:30:19.161680 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:19.161569 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda72d0e51_8bc7_48d1_b552_f8f4b4a532f9.slice/crio-8c1c09aaf6ca3d09f121170bcaec35423a9d15fa971af41ab00c051c05ddc091 WatchSource:0}: Error finding container 8c1c09aaf6ca3d09f121170bcaec35423a9d15fa971af41ab00c051c05ddc091: Status 404 returned error can't find the 
container with id 8c1c09aaf6ca3d09f121170bcaec35423a9d15fa971af41ab00c051c05ddc091 Apr 16 18:30:19.162568 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:19.162375 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89df2e8c_e3ce_4dda_afe0_e3720c021e56.slice/crio-0b428cac920419c87c5bdc03f15f905c761466274d1fb8578666a4e1eef3fa5c WatchSource:0}: Error finding container 0b428cac920419c87c5bdc03f15f905c761466274d1fb8578666a4e1eef3fa5c: Status 404 returned error can't find the container with id 0b428cac920419c87c5bdc03f15f905c761466274d1fb8578666a4e1eef3fa5c Apr 16 18:30:19.164213 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:19.164184 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4e163bd_89bf_4b55_9d51_38032e333eb1.slice/crio-865a3c314f693a0af09722dd1baf0fafe84cc0e5707c46c42367dca29ecdd163 WatchSource:0}: Error finding container 865a3c314f693a0af09722dd1baf0fafe84cc0e5707c46c42367dca29ecdd163: Status 404 returned error can't find the container with id 865a3c314f693a0af09722dd1baf0fafe84cc0e5707c46c42367dca29ecdd163 Apr 16 18:30:19.165421 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:19.165266 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda54cc5a5_36a6_41a7_bb25_fc1eb332a322.slice/crio-373d60030040c3b5fc3df944993dbe33c16cae82ba231badaa0ce4e974ab2239 WatchSource:0}: Error finding container 373d60030040c3b5fc3df944993dbe33c16cae82ba231badaa0ce4e974ab2239: Status 404 returned error can't find the container with id 373d60030040c3b5fc3df944993dbe33c16cae82ba231badaa0ce4e974ab2239 Apr 16 18:30:19.215599 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:19.215574 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qr9mp\" (UniqueName: \"kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp\") pod \"network-check-target-tfkdr\" (UID: \"687e1330-7999-4eea-a8c8-b11fd9d8448f\") " pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:19.215721 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:19.215701 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:30:19.215721 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:19.215718 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:30:19.215799 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:19.215731 2569 projected.go:194] Error preparing data for projected volume kube-api-access-qr9mp for pod openshift-network-diagnostics/network-check-target-tfkdr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:19.215799 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:19.215789 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp podName:687e1330-7999-4eea-a8c8-b11fd9d8448f nodeName:}" failed. No retries permitted until 2026-04-16 18:30:20.215771921 +0000 UTC m=+4.193855344 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qr9mp" (UniqueName: "kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp") pod "network-check-target-tfkdr" (UID: "687e1330-7999-4eea-a8c8-b11fd9d8448f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:19.537726 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:19.537397 2569 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-15 18:25:17 +0000 UTC" deadline="2027-11-21 08:53:58.158936893 +0000 UTC" Apr 16 18:30:19.537726 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:19.537640 2569 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="14006h23m38.621303819s" Apr 16 18:30:19.653244 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:19.653187 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rfstz" event={"ID":"a4e163bd-89bf-4b55-9d51-38032e333eb1","Type":"ContainerStarted","Data":"865a3c314f693a0af09722dd1baf0fafe84cc0e5707c46c42367dca29ecdd163"} Apr 16 18:30:19.661368 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:19.659880 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-nx8s6" event={"ID":"a72d0e51-8bc7-48d1-b552-f8f4b4a532f9","Type":"ContainerStarted","Data":"8c1c09aaf6ca3d09f121170bcaec35423a9d15fa971af41ab00c051c05ddc091"} Apr 16 18:30:19.663364 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:19.663276 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" event={"ID":"38147eb3-233a-48ee-ac14-02eabe278c0a","Type":"ContainerStarted","Data":"903a9ee83a0ec7d456a13792d91f7339acb818a0f855ae7a95162eef1ed0d0d6"} Apr 16 18:30:19.665222 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:19.665191 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" event={"ID":"bb2c10db-7942-42a5-a328-06839f22865c","Type":"ContainerStarted","Data":"c39695e88fdadc88f18fa0ea3adf81fd811251542187760afd65a429a2204896"} Apr 16 18:30:19.668799 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:19.668771 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-6696h" event={"ID":"e30175fe-31a1-408c-bf6b-fcf72a498c7c","Type":"ContainerStarted","Data":"4576a0881c270e534bd6aa5e9d523a8d83bf201b1c6d2d3f217642737d2a7136"} Apr 16 18:30:19.671398 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:19.671370 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8h69z" event={"ID":"8b1f3fed-8fbc-4087-a06e-b4bb1396ba36","Type":"ContainerStarted","Data":"e24f47452d659fb93450641b1dc1c963661a38a9fcddd0bf4846fa44c50d24e2"} Apr 16 18:30:19.676772 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:19.676742 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-14.ec2.internal" event={"ID":"5c9e91c38fa43bfd6e69aef3cdcafb41","Type":"ContainerStarted","Data":"55cfbda54647897b233a1b721002e4c7d865b1374313ec4ad4fdbf6a9318a132"} Apr 16 18:30:19.682418 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:19.682376 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-95kg5" 
event={"ID":"89df2e8c-e3ce-4dda-afe0-e3720c021e56","Type":"ContainerStarted","Data":"0b428cac920419c87c5bdc03f15f905c761466274d1fb8578666a4e1eef3fa5c"} Apr 16 18:30:19.687717 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:19.687690 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2jts" event={"ID":"c9036c3c-a41d-405f-acbf-c30968863203","Type":"ContainerStarted","Data":"e3e0f907a91c81c253e831c49bb9e3324eb4f5bb1d564523dab9a3f9b6e6c018"} Apr 16 18:30:19.689846 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:19.689748 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-132-14.ec2.internal" podStartSLOduration=2.689736618 podStartE2EDuration="2.689736618s" podCreationTimestamp="2026-04-16 18:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:30:19.689483467 +0000 UTC m=+3.667566910" watchObservedRunningTime="2026-04-16 18:30:19.689736618 +0000 UTC m=+3.667820063" Apr 16 18:30:19.690875 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:19.690827 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-h4gn9" event={"ID":"a54cc5a5-36a6-41a7-bb25-fc1eb332a322","Type":"ContainerStarted","Data":"373d60030040c3b5fc3df944993dbe33c16cae82ba231badaa0ce4e974ab2239"} Apr 16 18:30:20.125217 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:20.125179 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs\") pod \"network-metrics-daemon-kk4tm\" (UID: \"dc3c5cbb-7bc5-4228-88bf-021a899d1e57\") " pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:20.125427 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:20.125395 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:20.125518 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:20.125467 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs podName:dc3c5cbb-7bc5-4228-88bf-021a899d1e57 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:22.12544788 +0000 UTC m=+6.103531326 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs") pod "network-metrics-daemon-kk4tm" (UID: "dc3c5cbb-7bc5-4228-88bf-021a899d1e57") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:20.225879 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:20.225786 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qr9mp\" (UniqueName: \"kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp\") pod \"network-check-target-tfkdr\" (UID: \"687e1330-7999-4eea-a8c8-b11fd9d8448f\") " pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:20.226044 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:20.225959 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:30:20.226044 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:20.225978 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:30:20.226044 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:20.225990 2569 projected.go:194] Error preparing data for projected volume kube-api-access-qr9mp for pod openshift-network-diagnostics/network-check-target-tfkdr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:20.226206 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:20.226048 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp podName:687e1330-7999-4eea-a8c8-b11fd9d8448f nodeName:}" failed. No retries permitted until 2026-04-16 18:30:22.226030888 +0000 UTC m=+6.204114316 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qr9mp" (UniqueName: "kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp") pod "network-check-target-tfkdr" (UID: "687e1330-7999-4eea-a8c8-b11fd9d8448f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:20.504534 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:20.504444 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-66tjb"] Apr 16 18:30:20.506552 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:20.506531 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:20.506689 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:20.506607 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-66tjb" podUID="fd1a63ff-830c-4979-9f9d-bd6268584fbf" Apr 16 18:30:20.628846 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:20.628640 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/fd1a63ff-830c-4979-9f9d-bd6268584fbf-kubelet-config\") pod \"global-pull-secret-syncer-66tjb\" (UID: \"fd1a63ff-830c-4979-9f9d-bd6268584fbf\") " pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:20.628846 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:20.628712 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/fd1a63ff-830c-4979-9f9d-bd6268584fbf-dbus\") pod \"global-pull-secret-syncer-66tjb\" (UID: \"fd1a63ff-830c-4979-9f9d-bd6268584fbf\") " pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:20.628846 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:20.628768 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret\") pod \"global-pull-secret-syncer-66tjb\" (UID: \"fd1a63ff-830c-4979-9f9d-bd6268584fbf\") " pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:20.646351 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:20.646307 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:20.646493 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:20.646473 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kk4tm" podUID="dc3c5cbb-7bc5-4228-88bf-021a899d1e57" Apr 16 18:30:20.646549 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:20.646532 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:20.646699 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:20.646633 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-tfkdr" podUID="687e1330-7999-4eea-a8c8-b11fd9d8448f" Apr 16 18:30:20.713456 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:20.713418 2569 generic.go:358] "Generic (PLEG): container finished" podID="c9756edff8da65478d24ce030daa7e12" containerID="b90882b242f3bf49fb85f34236caf4d67a9b9cd183b089aacea45b5c4335c0b7" exitCode=0 Apr 16 18:30:20.714441 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:20.714380 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal" event={"ID":"c9756edff8da65478d24ce030daa7e12","Type":"ContainerDied","Data":"b90882b242f3bf49fb85f34236caf4d67a9b9cd183b089aacea45b5c4335c0b7"} Apr 16 18:30:20.729605 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:20.729574 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret\") pod \"global-pull-secret-syncer-66tjb\" (UID: \"fd1a63ff-830c-4979-9f9d-bd6268584fbf\") " pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:20.729736 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:20.729644 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/fd1a63ff-830c-4979-9f9d-bd6268584fbf-kubelet-config\") pod \"global-pull-secret-syncer-66tjb\" (UID: \"fd1a63ff-830c-4979-9f9d-bd6268584fbf\") " pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:20.729736 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:20.729686 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/fd1a63ff-830c-4979-9f9d-bd6268584fbf-dbus\") pod \"global-pull-secret-syncer-66tjb\" (UID: \"fd1a63ff-830c-4979-9f9d-bd6268584fbf\") " pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:20.729897 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:20.729877 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/fd1a63ff-830c-4979-9f9d-bd6268584fbf-dbus\") pod \"global-pull-secret-syncer-66tjb\" (UID: \"fd1a63ff-830c-4979-9f9d-bd6268584fbf\") " pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:20.730021 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:20.730003 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:20.730077 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:20.730067 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret podName:fd1a63ff-830c-4979-9f9d-bd6268584fbf nodeName:}" failed. No retries permitted until 2026-04-16 18:30:21.230048186 +0000 UTC m=+5.208131623 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret") pod "global-pull-secret-syncer-66tjb" (UID: "fd1a63ff-830c-4979-9f9d-bd6268584fbf") : object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:20.730321 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:20.730302 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/fd1a63ff-830c-4979-9f9d-bd6268584fbf-kubelet-config\") pod \"global-pull-secret-syncer-66tjb\" (UID: \"fd1a63ff-830c-4979-9f9d-bd6268584fbf\") " pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:21.233374 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:21.233322 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret\") pod \"global-pull-secret-syncer-66tjb\" (UID: \"fd1a63ff-830c-4979-9f9d-bd6268584fbf\") " pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:21.233538 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:21.233521 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:21.233613 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:21.233585 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret podName:fd1a63ff-830c-4979-9f9d-bd6268584fbf nodeName:}" failed. No retries permitted until 2026-04-16 18:30:22.233567402 +0000 UTC m=+6.211650850 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret") pod "global-pull-secret-syncer-66tjb" (UID: "fd1a63ff-830c-4979-9f9d-bd6268584fbf") : object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:21.643517 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:21.643477 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:21.643945 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:21.643601 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-66tjb" podUID="fd1a63ff-830c-4979-9f9d-bd6268584fbf" Apr 16 18:30:21.719392 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:21.719290 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal" event={"ID":"c9756edff8da65478d24ce030daa7e12","Type":"ContainerStarted","Data":"4918ca96c09bdc649f722ed3410e16efd516de0eaf44b07a518b596fdda29f9a"} Apr 16 18:30:22.141953 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:22.141870 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs\") pod \"network-metrics-daemon-kk4tm\" (UID: \"dc3c5cbb-7bc5-4228-88bf-021a899d1e57\") " pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:22.142105 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:22.142085 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:22.142236 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:22.142177 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs podName:dc3c5cbb-7bc5-4228-88bf-021a899d1e57 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:26.142156791 +0000 UTC m=+10.120240230 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs") pod "network-metrics-daemon-kk4tm" (UID: "dc3c5cbb-7bc5-4228-88bf-021a899d1e57") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:22.242626 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:22.242573 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret\") pod \"global-pull-secret-syncer-66tjb\" (UID: \"fd1a63ff-830c-4979-9f9d-bd6268584fbf\") " pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:22.242806 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:22.242698 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qr9mp\" (UniqueName: \"kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp\") pod \"network-check-target-tfkdr\" (UID: \"687e1330-7999-4eea-a8c8-b11fd9d8448f\") " pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:22.242806 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:22.242733 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:22.242910 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:22.242816 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret podName:fd1a63ff-830c-4979-9f9d-bd6268584fbf nodeName:}" failed. No retries permitted until 2026-04-16 18:30:24.242796182 +0000 UTC m=+8.220879621 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret") pod "global-pull-secret-syncer-66tjb" (UID: "fd1a63ff-830c-4979-9f9d-bd6268584fbf") : object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:22.242910 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:22.242822 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:30:22.242910 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:22.242841 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:30:22.242910 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:22.242855 2569 projected.go:194] Error preparing data for projected volume kube-api-access-qr9mp for pod openshift-network-diagnostics/network-check-target-tfkdr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:22.243092 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:22.242913 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp podName:687e1330-7999-4eea-a8c8-b11fd9d8448f nodeName:}" failed. No retries permitted until 2026-04-16 18:30:26.242896862 +0000 UTC m=+10.220980292 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qr9mp" (UniqueName: "kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp") pod "network-check-target-tfkdr" (UID: "687e1330-7999-4eea-a8c8-b11fd9d8448f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:22.643510 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:22.643478 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:22.643702 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:22.643623 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kk4tm" podUID="dc3c5cbb-7bc5-4228-88bf-021a899d1e57" Apr 16 18:30:22.644058 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:22.644038 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:22.644253 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:22.644228 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-tfkdr" podUID="687e1330-7999-4eea-a8c8-b11fd9d8448f" Apr 16 18:30:23.643944 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:23.643909 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:23.644424 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:23.644048 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-66tjb" podUID="fd1a63ff-830c-4979-9f9d-bd6268584fbf" Apr 16 18:30:24.257828 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:24.257785 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret\") pod \"global-pull-secret-syncer-66tjb\" (UID: \"fd1a63ff-830c-4979-9f9d-bd6268584fbf\") " pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:24.258009 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:24.257979 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:24.258060 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:24.258046 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret podName:fd1a63ff-830c-4979-9f9d-bd6268584fbf nodeName:}" failed. No retries permitted until 2026-04-16 18:30:28.258027559 +0000 UTC m=+12.236110988 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret") pod "global-pull-secret-syncer-66tjb" (UID: "fd1a63ff-830c-4979-9f9d-bd6268584fbf") : object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:24.643613 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:24.643538 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:24.643754 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:24.643676 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kk4tm" podUID="dc3c5cbb-7bc5-4228-88bf-021a899d1e57" Apr 16 18:30:24.643754 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:24.643735 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:24.643892 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:24.643868 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-tfkdr" podUID="687e1330-7999-4eea-a8c8-b11fd9d8448f" Apr 16 18:30:25.644034 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:25.643993 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:25.644487 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:25.644122 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-66tjb" podUID="fd1a63ff-830c-4979-9f9d-bd6268584fbf" Apr 16 18:30:26.172783 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:26.172745 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs\") pod \"network-metrics-daemon-kk4tm\" (UID: \"dc3c5cbb-7bc5-4228-88bf-021a899d1e57\") " pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:26.172945 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:26.172871 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:26.173020 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:26.172946 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs podName:dc3c5cbb-7bc5-4228-88bf-021a899d1e57 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:34.172926458 +0000 UTC m=+18.151009897 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs") pod "network-metrics-daemon-kk4tm" (UID: "dc3c5cbb-7bc5-4228-88bf-021a899d1e57") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:26.273541 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:26.273500 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qr9mp\" (UniqueName: \"kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp\") pod \"network-check-target-tfkdr\" (UID: \"687e1330-7999-4eea-a8c8-b11fd9d8448f\") " pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:26.273725 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:26.273634 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:30:26.273725 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:26.273654 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:30:26.273725 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:26.273664 2569 projected.go:194] Error preparing data for projected volume kube-api-access-qr9mp for pod openshift-network-diagnostics/network-check-target-tfkdr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:26.273912 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:26.273727 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp podName:687e1330-7999-4eea-a8c8-b11fd9d8448f nodeName:}" failed. 
No retries permitted until 2026-04-16 18:30:34.273708524 +0000 UTC m=+18.251791968 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qr9mp" (UniqueName: "kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp") pod "network-check-target-tfkdr" (UID: "687e1330-7999-4eea-a8c8-b11fd9d8448f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:26.645567 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:26.644945 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:26.645567 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:26.645067 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kk4tm" podUID="dc3c5cbb-7bc5-4228-88bf-021a899d1e57" Apr 16 18:30:26.645567 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:26.645433 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:26.645567 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:26.645520 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-tfkdr" podUID="687e1330-7999-4eea-a8c8-b11fd9d8448f" Apr 16 18:30:27.643809 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:27.643775 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:27.644003 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:27.643900 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-66tjb" podUID="fd1a63ff-830c-4979-9f9d-bd6268584fbf" Apr 16 18:30:28.288832 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:28.288729 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret\") pod \"global-pull-secret-syncer-66tjb\" (UID: \"fd1a63ff-830c-4979-9f9d-bd6268584fbf\") " pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:28.289269 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:28.288885 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:28.289269 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:28.288958 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret podName:fd1a63ff-830c-4979-9f9d-bd6268584fbf nodeName:}" failed. 
No retries permitted until 2026-04-16 18:30:36.288937109 +0000 UTC m=+20.267020547 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret") pod "global-pull-secret-syncer-66tjb" (UID: "fd1a63ff-830c-4979-9f9d-bd6268584fbf") : object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:28.644178 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:28.644099 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:28.644327 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:28.644118 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:28.644327 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:28.644255 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kk4tm" podUID="dc3c5cbb-7bc5-4228-88bf-021a899d1e57" Apr 16 18:30:28.644327 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:28.644294 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-tfkdr" podUID="687e1330-7999-4eea-a8c8-b11fd9d8448f" Apr 16 18:30:29.643854 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:29.643821 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:29.644314 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:29.643948 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-66tjb" podUID="fd1a63ff-830c-4979-9f9d-bd6268584fbf" Apr 16 18:30:30.644123 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:30.644077 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:30.644572 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:30.644130 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:30.644572 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:30.644228 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kk4tm" podUID="dc3c5cbb-7bc5-4228-88bf-021a899d1e57" Apr 16 18:30:30.644572 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:30.644379 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-tfkdr" podUID="687e1330-7999-4eea-a8c8-b11fd9d8448f" Apr 16 18:30:31.643702 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:31.643675 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:31.643863 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:31.643765 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-66tjb" podUID="fd1a63ff-830c-4979-9f9d-bd6268584fbf" Apr 16 18:30:32.643809 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:32.643770 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:32.643809 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:32.643812 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:32.644352 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:32.643945 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kk4tm" podUID="dc3c5cbb-7bc5-4228-88bf-021a899d1e57" Apr 16 18:30:32.644352 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:32.644088 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-tfkdr" podUID="687e1330-7999-4eea-a8c8-b11fd9d8448f" Apr 16 18:30:33.644301 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:33.644265 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:33.644742 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:33.644417 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-66tjb" podUID="fd1a63ff-830c-4979-9f9d-bd6268584fbf" Apr 16 18:30:34.233786 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:34.233749 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs\") pod \"network-metrics-daemon-kk4tm\" (UID: \"dc3c5cbb-7bc5-4228-88bf-021a899d1e57\") " pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:34.233997 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:34.233934 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:34.234056 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:34.234005 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs podName:dc3c5cbb-7bc5-4228-88bf-021a899d1e57 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:50.233986469 +0000 UTC m=+34.212069893 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs") pod "network-metrics-daemon-kk4tm" (UID: "dc3c5cbb-7bc5-4228-88bf-021a899d1e57") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:34.334852 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:34.334809 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qr9mp\" (UniqueName: \"kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp\") pod \"network-check-target-tfkdr\" (UID: \"687e1330-7999-4eea-a8c8-b11fd9d8448f\") " pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:34.335019 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:34.334993 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:30:34.335019 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:34.335018 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:30:34.335148 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:34.335032 2569 projected.go:194] Error preparing data for projected volume kube-api-access-qr9mp for pod openshift-network-diagnostics/network-check-target-tfkdr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:34.335148 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:34.335097 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp podName:687e1330-7999-4eea-a8c8-b11fd9d8448f nodeName:}" failed. No retries permitted until 2026-04-16 18:30:50.335077834 +0000 UTC m=+34.313161261 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qr9mp" (UniqueName: "kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp") pod "network-check-target-tfkdr" (UID: "687e1330-7999-4eea-a8c8-b11fd9d8448f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:34.643832 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:34.643749 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:34.643982 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:34.643762 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:34.643982 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:34.643891 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kk4tm" podUID="dc3c5cbb-7bc5-4228-88bf-021a899d1e57" Apr 16 18:30:34.643982 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:34.643961 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-tfkdr" podUID="687e1330-7999-4eea-a8c8-b11fd9d8448f" Apr 16 18:30:35.643631 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:35.643590 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:35.644072 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:35.643738 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-66tjb" podUID="fd1a63ff-830c-4979-9f9d-bd6268584fbf" Apr 16 18:30:36.350833 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.350797 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret\") pod \"global-pull-secret-syncer-66tjb\" (UID: \"fd1a63ff-830c-4979-9f9d-bd6268584fbf\") " pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:36.350990 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:36.350923 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:36.350990 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:36.350981 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret podName:fd1a63ff-830c-4979-9f9d-bd6268584fbf nodeName:}" failed. No retries permitted until 2026-04-16 18:30:52.350963687 +0000 UTC m=+36.329047115 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret") pod "global-pull-secret-syncer-66tjb" (UID: "fd1a63ff-830c-4979-9f9d-bd6268584fbf") : object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:36.645789 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.644504 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:36.645789 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:36.644859 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-tfkdr" podUID="687e1330-7999-4eea-a8c8-b11fd9d8448f" Apr 16 18:30:36.646300 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.646262 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:36.646401 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:36.646378 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kk4tm" podUID="dc3c5cbb-7bc5-4228-88bf-021a899d1e57" Apr 16 18:30:36.745306 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.745126 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8h69z" event={"ID":"8b1f3fed-8fbc-4087-a06e-b4bb1396ba36","Type":"ContainerStarted","Data":"b462df8db8181596e320282d77e51680e0a19783b49ab9b436ca0778aba97fe9"} Apr 16 18:30:36.746794 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.746772 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-95kg5" event={"ID":"89df2e8c-e3ce-4dda-afe0-e3720c021e56","Type":"ContainerStarted","Data":"7add9d67520bac07f2a6b7f02f776042160c5930f650927b89de4b68deb0ced7"} Apr 16 18:30:36.748267 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.748246 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2jts" event={"ID":"c9036c3c-a41d-405f-acbf-c30968863203","Type":"ContainerStarted","Data":"72c74f6b4f0ab31e9471b35f75635aa65b79e5a1afa8560567b9678017e69ca2"} Apr 16 18:30:36.752982 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.752955 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rfstz" event={"ID":"a4e163bd-89bf-4b55-9d51-38032e333eb1","Type":"ContainerStarted","Data":"7581467c7006e6b6c37de7c5906e28eda875f6dc33639a5ac0645123c12f4abd"} Apr 16 18:30:36.754269 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.754247 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-nx8s6" event={"ID":"a72d0e51-8bc7-48d1-b552-f8f4b4a532f9","Type":"ContainerStarted","Data":"b2dbb35d179786f04f3497dd09fddc7f32eb711816a22fd12e33ff3330aa0bfa"} Apr 16 18:30:36.755550 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.755518 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" event={"ID":"38147eb3-233a-48ee-ac14-02eabe278c0a","Type":"ContainerStarted","Data":"cd635972d9923e31ee0925c24037b4adaa6862267cedda45875ace11fda558da"} Apr 16 18:30:36.757061 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.757044 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" event={"ID":"bb2c10db-7942-42a5-a328-06839f22865c","Type":"ContainerStarted","Data":"42fb0ea932114ea8213225b6f679028bf8f2185bf465c1b0b8316f32a3e5cfdf"} Apr 16 18:30:36.757112 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.757067 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" event={"ID":"bb2c10db-7942-42a5-a328-06839f22865c","Type":"ContainerStarted","Data":"02e012140c31d24c274a86fda6eee264e2b7c337c91da48b719fc703c38f400e"} Apr 16 18:30:36.758667 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.758640 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-6696h" event={"ID":"e30175fe-31a1-408c-bf6b-fcf72a498c7c","Type":"ContainerStarted","Data":"9047d366510f4b7c25d13bcb560d6a72bd02d7c8b9b3f7361f5eefdd3d940e3c"} Apr 16 18:30:36.760117 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.760059 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-132-14.ec2.internal" podStartSLOduration=19.760043827 podStartE2EDuration="19.760043827s" podCreationTimestamp="2026-04-16 18:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:30:21.732830742 +0000 UTC m=+5.710914189" watchObservedRunningTime="2026-04-16 18:30:36.760043827 +0000 UTC m=+20.738127287" Apr 16 18:30:36.760400 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.760363 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-8h69z" podStartSLOduration=3.561712986 podStartE2EDuration="20.760328184s" podCreationTimestamp="2026-04-16 18:30:16 +0000 UTC" firstStartedPulling="2026-04-16 18:30:19.157225507 +0000 UTC m=+3.135308934" lastFinishedPulling="2026-04-16 18:30:36.355840693 +0000 UTC m=+20.333924132" observedRunningTime="2026-04-16 18:30:36.759168542 +0000 UTC m=+20.737251987" watchObservedRunningTime="2026-04-16 18:30:36.760328184 +0000 UTC m=+20.738411629" Apr 16 18:30:36.806113 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.806048 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-nx8s6" podStartSLOduration=8.323058432 podStartE2EDuration="20.806028115s" podCreationTimestamp="2026-04-16 18:30:16 +0000 UTC" firstStartedPulling="2026-04-16 18:30:19.163718686 +0000 UTC m=+3.141802108" lastFinishedPulling="2026-04-16 18:30:31.646688365 +0000 UTC m=+15.624771791" observedRunningTime="2026-04-16 18:30:36.790377106 +0000 UTC m=+20.768460646" watchObservedRunningTime="2026-04-16 18:30:36.806028115 +0000 UTC m=+20.784111562" Apr 16 18:30:36.806250 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.806202 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-6696h" podStartSLOduration=3.577083884 podStartE2EDuration="20.806194448s" podCreationTimestamp="2026-04-16 18:30:16 +0000 UTC" firstStartedPulling="2026-04-16 18:30:19.158872059 +0000 UTC m=+3.136955482" 
lastFinishedPulling="2026-04-16 18:30:36.387982619 +0000 UTC m=+20.366066046" observedRunningTime="2026-04-16 18:30:36.805246881 +0000 UTC m=+20.783330328" watchObservedRunningTime="2026-04-16 18:30:36.806194448 +0000 UTC m=+20.784277894" Apr 16 18:30:36.819830 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.819776 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-rfstz" podStartSLOduration=3.599653434 podStartE2EDuration="20.819757416s" podCreationTimestamp="2026-04-16 18:30:16 +0000 UTC" firstStartedPulling="2026-04-16 18:30:19.166128128 +0000 UTC m=+3.144211555" lastFinishedPulling="2026-04-16 18:30:36.3862321 +0000 UTC m=+20.364315537" observedRunningTime="2026-04-16 18:30:36.819721957 +0000 UTC m=+20.797805402" watchObservedRunningTime="2026-04-16 18:30:36.819757416 +0000 UTC m=+20.797840863" Apr 16 18:30:36.835985 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:36.835942 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-95kg5" podStartSLOduration=3.373197099 podStartE2EDuration="20.835923568s" podCreationTimestamp="2026-04-16 18:30:16 +0000 UTC" firstStartedPulling="2026-04-16 18:30:19.165052492 +0000 UTC m=+3.143135929" lastFinishedPulling="2026-04-16 18:30:36.627778962 +0000 UTC m=+20.605862398" observedRunningTime="2026-04-16 18:30:36.835780257 +0000 UTC m=+20.813863703" watchObservedRunningTime="2026-04-16 18:30:36.835923568 +0000 UTC m=+20.814007015" Apr 16 18:30:37.644153 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:37.644063 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:37.644273 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:37.644188 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-66tjb" podUID="fd1a63ff-830c-4979-9f9d-bd6268584fbf" Apr 16 18:30:37.717322 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:37.717303 2569 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Apr 16 18:30:37.761270 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:37.761240 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-h4gn9" event={"ID":"a54cc5a5-36a6-41a7-bb25-fc1eb332a322","Type":"ContainerStarted","Data":"de2c8205bb9963836c53d407d4210ef61f4091d3ce1742d294231f81f6dbb691"} Apr 16 18:30:37.762758 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:37.762729 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" event={"ID":"38147eb3-233a-48ee-ac14-02eabe278c0a","Type":"ContainerStarted","Data":"1530f6d9ee90b59f4aac9c3b009e9f65e84ee1c2df7ab349b887110fdf96731a"} Apr 16 18:30:37.765212 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:37.765191 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" event={"ID":"bb2c10db-7942-42a5-a328-06839f22865c","Type":"ContainerStarted","Data":"5ce1bf277e897caa605adb928b3babcd0a2d689e3fcca267b8617013731a23a3"} Apr 16 18:30:37.765301 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:37.765214 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" event={"ID":"bb2c10db-7942-42a5-a328-06839f22865c","Type":"ContainerStarted","Data":"564d6a2523229178e23b9b451b507aca50400b7ddad603e936901118dee797f6"} Apr 16 18:30:37.765301 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:37.765223 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" event={"ID":"bb2c10db-7942-42a5-a328-06839f22865c","Type":"ContainerStarted","Data":"dca5f98f5bd9ce048083436eca794e776908178899207ada2c70fb36c97ac855"} Apr 16 18:30:37.765301 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:37.765233 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" event={"ID":"bb2c10db-7942-42a5-a328-06839f22865c","Type":"ContainerStarted","Data":"d7c24fa460196d86ebda26a0aa2604b659fee19e87cbc2099f7216278b0c2d6b"} Apr 16 18:30:37.766398 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:37.766377 2569 generic.go:358] "Generic (PLEG): container finished" podID="c9036c3c-a41d-405f-acbf-c30968863203" containerID="72c74f6b4f0ab31e9471b35f75635aa65b79e5a1afa8560567b9678017e69ca2" exitCode=0 Apr 16 18:30:37.766476 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:37.766456 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2jts" event={"ID":"c9036c3c-a41d-405f-acbf-c30968863203","Type":"ContainerDied","Data":"72c74f6b4f0ab31e9471b35f75635aa65b79e5a1afa8560567b9678017e69ca2"} Apr 16 18:30:37.774804 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:37.774768 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-h4gn9" podStartSLOduration=4.586015223 podStartE2EDuration="21.774756887s" podCreationTimestamp="2026-04-16 18:30:16 +0000 UTC" firstStartedPulling="2026-04-16 18:30:19.167090451 +0000 UTC m=+3.145173874" lastFinishedPulling="2026-04-16 18:30:36.3558321 +0000 UTC m=+20.333915538" observedRunningTime="2026-04-16 18:30:37.774634599 +0000 UTC 
m=+21.752718041" watchObservedRunningTime="2026-04-16 18:30:37.774756887 +0000 UTC m=+21.752840333" Apr 16 18:30:38.577830 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:38.577725 2569 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-16T18:30:37.717319477Z","UUID":"0f9c57a4-5487-452a-8abf-b5543cafa6ec","Handler":null,"Name":"","Endpoint":""} Apr 16 18:30:38.580979 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:38.580956 2569 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 16 18:30:38.580979 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:38.580984 2569 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 16 18:30:38.643648 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:38.643591 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:38.643840 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:38.643657 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:38.643840 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:38.643766 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kk4tm" podUID="dc3c5cbb-7bc5-4228-88bf-021a899d1e57" Apr 16 18:30:38.643946 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:38.643896 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-tfkdr" podUID="687e1330-7999-4eea-a8c8-b11fd9d8448f" Apr 16 18:30:38.769923 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:38.769835 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" event={"ID":"38147eb3-233a-48ee-ac14-02eabe278c0a","Type":"ContainerStarted","Data":"2637a7abfc708d92abf03702383bf6512b777663ee7df3f854401ded0da66b8b"} Apr 16 18:30:38.796177 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:38.796117 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gfcxg" podStartSLOduration=2.5041516550000003 podStartE2EDuration="21.796097857s" podCreationTimestamp="2026-04-16 18:30:17 +0000 UTC" firstStartedPulling="2026-04-16 18:30:19.162140226 +0000 UTC m=+3.140223649" lastFinishedPulling="2026-04-16 18:30:38.454086414 +0000 UTC m=+22.432169851" observedRunningTime="2026-04-16 18:30:38.795910133 +0000 UTC m=+22.773993578" watchObservedRunningTime="2026-04-16 18:30:38.796097857 +0000 UTC m=+22.774181303" Apr 16 18:30:39.049120 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:39.049037 2569 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-nx8s6" Apr 16 18:30:39.049703 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:39.049675 2569 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-nx8s6" Apr 16 18:30:39.643550 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:39.643494 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:39.643741 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:39.643636 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-66tjb" podUID="fd1a63ff-830c-4979-9f9d-bd6268584fbf" Apr 16 18:30:39.775166 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:39.775101 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" event={"ID":"bb2c10db-7942-42a5-a328-06839f22865c","Type":"ContainerStarted","Data":"ab7ea9c16b30804b028d3f7e85a9abb8efbc753a0d688d442d0d16e5f08f23d4"} Apr 16 18:30:40.643274 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:40.643244 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:40.643480 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:40.643392 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kk4tm" podUID="dc3c5cbb-7bc5-4228-88bf-021a899d1e57" Apr 16 18:30:40.643480 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:40.643446 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:40.643602 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:40.643545 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-tfkdr" podUID="687e1330-7999-4eea-a8c8-b11fd9d8448f" Apr 16 18:30:41.643566 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:41.643487 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:41.644006 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:41.643601 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-66tjb" podUID="fd1a63ff-830c-4979-9f9d-bd6268584fbf" Apr 16 18:30:42.646203 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:42.646172 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:42.646871 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:42.646179 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:42.646871 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:42.646269 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-tfkdr" podUID="687e1330-7999-4eea-a8c8-b11fd9d8448f" Apr 16 18:30:42.646871 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:42.646352 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kk4tm" podUID="dc3c5cbb-7bc5-4228-88bf-021a899d1e57" Apr 16 18:30:42.782929 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:42.782894 2569 generic.go:358] "Generic (PLEG): container finished" podID="c9036c3c-a41d-405f-acbf-c30968863203" containerID="7e24f53a70b50fa19133102503de5d2f6fc8bd51164822349fa373fefdf508cb" exitCode=0 Apr 16 18:30:42.783063 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:42.782982 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2jts" event={"ID":"c9036c3c-a41d-405f-acbf-c30968863203","Type":"ContainerDied","Data":"7e24f53a70b50fa19133102503de5d2f6fc8bd51164822349fa373fefdf508cb"} Apr 16 18:30:42.786168 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:42.786132 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" event={"ID":"bb2c10db-7942-42a5-a328-06839f22865c","Type":"ContainerStarted","Data":"eb1757ee1f9aab9304bbf03792fe47be1b4470c9a2abf2adcc3563fe1e16dbd2"} Apr 16 18:30:42.786455 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:42.786438 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:42.786517 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:42.786464 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:42.800910 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:42.800886 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:42.822005 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:42.821963 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" podStartSLOduration=9.55633104 podStartE2EDuration="26.821949439s" podCreationTimestamp="2026-04-16 18:30:16 +0000 UTC" firstStartedPulling="2026-04-16 18:30:19.162261724 +0000 UTC m=+3.140345163" lastFinishedPulling="2026-04-16 18:30:36.427880139 +0000 UTC m=+20.405963562" observedRunningTime="2026-04-16 18:30:42.821607374 +0000 UTC m=+26.799690818" watchObservedRunningTime="2026-04-16 18:30:42.821949439 +0000 UTC m=+26.800032896" Apr 16 18:30:43.643521 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:43.643243 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:43.643521 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:43.643511 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-66tjb" podUID="fd1a63ff-830c-4979-9f9d-bd6268584fbf" Apr 16 18:30:43.711156 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:43.711127 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-66tjb"] Apr 16 18:30:43.714117 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:43.714096 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-tfkdr"] Apr 16 18:30:43.714237 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:43.714222 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:43.714414 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:43.714328 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-tfkdr" podUID="687e1330-7999-4eea-a8c8-b11fd9d8448f" Apr 16 18:30:43.714825 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:43.714806 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kk4tm"] Apr 16 18:30:43.714915 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:43.714904 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:43.715008 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:43.714992 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kk4tm" podUID="dc3c5cbb-7bc5-4228-88bf-021a899d1e57" Apr 16 18:30:43.789431 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:43.789395 2569 generic.go:358] "Generic (PLEG): container finished" podID="c9036c3c-a41d-405f-acbf-c30968863203" containerID="0b03f16e95d8fa7067e8dec388de866cb1097c3279660f63116d1f4551685b64" exitCode=0 Apr 16 18:30:43.789578 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:43.789512 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2jts" event={"ID":"c9036c3c-a41d-405f-acbf-c30968863203","Type":"ContainerDied","Data":"0b03f16e95d8fa7067e8dec388de866cb1097c3279660f63116d1f4551685b64"} Apr 16 18:30:43.789791 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:43.789774 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:43.790882 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:43.790309 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:43.790882 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:43.790306 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-66tjb" podUID="fd1a63ff-830c-4979-9f9d-bd6268584fbf" Apr 16 18:30:43.804201 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:43.804179 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:30:44.793740 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:44.793702 2569 generic.go:358] "Generic (PLEG): container finished" podID="c9036c3c-a41d-405f-acbf-c30968863203" containerID="c14576efd070d61e30bfb2386cdb5f8e20268edf382b9fe746912a4c59f2833b" exitCode=0 Apr 16 18:30:44.794229 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:44.793786 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2jts" event={"ID":"c9036c3c-a41d-405f-acbf-c30968863203","Type":"ContainerDied","Data":"c14576efd070d61e30bfb2386cdb5f8e20268edf382b9fe746912a4c59f2833b"} Apr 16 18:30:45.644105 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:45.644026 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:45.644105 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:45.644049 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:45.644299 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:45.644026 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:45.644299 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:45.644146 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-tfkdr" podUID="687e1330-7999-4eea-a8c8-b11fd9d8448f" Apr 16 18:30:45.644299 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:45.644228 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-66tjb" podUID="fd1a63ff-830c-4979-9f9d-bd6268584fbf" Apr 16 18:30:45.644474 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:45.644317 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kk4tm" podUID="dc3c5cbb-7bc5-4228-88bf-021a899d1e57" Apr 16 18:30:47.616211 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:47.616177 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-nx8s6" Apr 16 18:30:47.616649 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:47.616356 2569 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 18:30:47.616841 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:47.616817 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-nx8s6" Apr 16 18:30:47.644072 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:47.644046 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:47.644204 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:47.644054 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:47.644204 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:47.644148 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kk4tm" podUID="dc3c5cbb-7bc5-4228-88bf-021a899d1e57" Apr 16 18:30:47.644204 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:47.644164 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:47.644456 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:47.644221 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-tfkdr" podUID="687e1330-7999-4eea-a8c8-b11fd9d8448f" Apr 16 18:30:47.644456 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:47.644325 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-66tjb" podUID="fd1a63ff-830c-4979-9f9d-bd6268584fbf" Apr 16 18:30:49.332044 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.332012 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-132-14.ec2.internal" event="NodeReady" Apr 16 18:30:49.332438 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.332149 2569 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 16 18:30:49.364868 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.364799 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc"] Apr 16 18:30:49.391244 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.391209 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-68699947db-2vcnw"] Apr 16 18:30:49.391436 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.391385 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" Apr 16 18:30:49.393313 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.393203 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Apr 16 18:30:49.393313 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.393306 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Apr 16 18:30:49.393530 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.393514 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"default-dockercfg-4rwc8\"" Apr 16 18:30:49.409056 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.408880 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-778f479cf5-qffzb"] Apr 16 18:30:49.409198 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.409020 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.411304 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.411239 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Apr 16 18:30:49.411304 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.411246 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-r7rs7\"" Apr 16 18:30:49.411304 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.411277 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Apr 16 18:30:49.411304 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.411280 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\"" Apr 16 18:30:49.416725 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.416463 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Apr 16 18:30:49.423167 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.423146 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb"] Apr 16 18:30:49.423318 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.423299 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-778f479cf5-qffzb" Apr 16 18:30:49.425530 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.425510 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"open-cluster-management-image-pull-credentials\"" Apr 16 18:30:49.425530 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.425518 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-hub-kubeconfig\"" Apr 16 18:30:49.426502 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.425823 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"kube-root-ca.crt\"" Apr 16 18:30:49.426502 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.425890 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-dockercfg-6g5w8\"" Apr 16 18:30:49.426502 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.426380 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"openshift-service-ca.crt\"" Apr 16 18:30:49.439879 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.439856 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc"] Apr 16 18:30:49.440041 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.440021 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" Apr 16 18:30:49.442972 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.442917 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"work-manager-hub-kubeconfig\"" Apr 16 18:30:49.458958 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.458935 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc"] Apr 16 18:30:49.458958 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.458963 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-778f479cf5-qffzb"] Apr 16 18:30:49.459144 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.458974 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb"] Apr 16 18:30:49.459144 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.458982 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc"] Apr 16 18:30:49.459144 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.458990 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-68699947db-2vcnw"] Apr 16 18:30:49.459144 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.459001 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-z87hx"] Apr 16 18:30:49.459144 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.459074 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.461197 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.461177 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-ca\"" Apr 16 18:30:49.461309 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.461287 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-service-proxy-server-certificates\"" Apr 16 18:30:49.461392 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.461376 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-open-cluster-management.io-proxy-agent-signer-client-cert\"" Apr 16 18:30:49.461476 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.461458 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-hub-kubeconfig\"" Apr 16 18:30:49.474850 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.474828 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z87hx"] Apr 16 18:30:49.474973 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.474957 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z87hx" Apr 16 18:30:49.476972 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.476954 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 16 18:30:49.477300 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.477179 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-6c796\"" Apr 16 18:30:49.477300 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.477194 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 16 18:30:49.477300 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.477186 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 16 18:30:49.481416 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.481398 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wq9r5"] Apr 16 18:30:49.495476 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.495453 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wq9r5"] Apr 16 18:30:49.495629 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.495594 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wq9r5" Apr 16 18:30:49.498368 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.497982 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-tdfb9\"" Apr 16 18:30:49.498368 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.498203 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 16 18:30:49.498560 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.498380 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 16 18:30:49.556292 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.556249 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/e1716530-3a79-4ef5-bd3c-0909772664d6-image-registry-private-configuration\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.556292 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.556297 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1716530-3a79-4ef5-bd3c-0909772664d6-trusted-ca\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.556550 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.556354 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/37a97773-82b0-4bd2-bc25-bc3330347365-klusterlet-config\") pod \"klusterlet-addon-workmgr-747745957c-f74wb\" (UID: \"37a97773-82b0-4bd2-bc25-bc3330347365\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" Apr 16 18:30:49.556550 ip-10-0-132-14 kubenswrapper[2569]: 
I0416 18:30:49.556380 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-bound-sa-token\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.556550 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.556404 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdkdk\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-kube-api-access-zdkdk\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.556550 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.556468 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/e8c054fa-1f67-4b74-9059-ed94490a803e-ca\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.556550 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.556514 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/e8c054fa-1f67-4b74-9059-ed94490a803e-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.556550 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.556547 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjdrv\" (UniqueName: \"kubernetes.io/projected/37a97773-82b0-4bd2-bc25-bc3330347365-kube-api-access-kjdrv\") pod \"klusterlet-addon-workmgr-747745957c-f74wb\" (UID: \"37a97773-82b0-4bd2-bc25-bc3330347365\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" Apr 16 18:30:49.556839 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.556576 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/e8c054fa-1f67-4b74-9059-ed94490a803e-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.556839 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.556602 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1716530-3a79-4ef5-bd3c-0909772664d6-installation-pull-secrets\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.556839 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.556627 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/e8c054fa-1f67-4b74-9059-ed94490a803e-hub\") pod 
\"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.556839 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.556655 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mfqz\" (UniqueName: \"kubernetes.io/projected/8db23076-0658-4e7c-aab7-30f06e2174dc-kube-api-access-2mfqz\") pod \"ingress-canary-z87hx\" (UID: \"8db23076-0658-4e7c-aab7-30f06e2174dc\") " pod="openshift-ingress-canary/ingress-canary-z87hx" Apr 16 18:30:49.556839 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.556749 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.556839 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.556807 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/d43a0fd9-7414-4a3a-b051-b1c0acbb4c00-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-778f479cf5-qffzb\" (UID: \"d43a0fd9-7414-4a3a-b051-b1c0acbb4c00\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-778f479cf5-qffzb" Apr 16 18:30:49.557103 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.556852 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w82d7\" (UniqueName: \"kubernetes.io/projected/d43a0fd9-7414-4a3a-b051-b1c0acbb4c00-kube-api-access-w82d7\") pod \"managed-serviceaccount-addon-agent-778f479cf5-qffzb\" (UID: \"d43a0fd9-7414-4a3a-b051-b1c0acbb4c00\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-778f479cf5-qffzb" Apr 16 18:30:49.557103 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.556888 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-p8jnc\" (UID: \"9d04db45-c40a-4deb-a86e-03e77a3b560e\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" Apr 16 18:30:49.557103 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.556919 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1716530-3a79-4ef5-bd3c-0909772664d6-ca-trust-extracted\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.557103 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.557006 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37a97773-82b0-4bd2-bc25-bc3330347365-tmp\") pod \"klusterlet-addon-workmgr-747745957c-f74wb\" (UID: \"37a97773-82b0-4bd2-bc25-bc3330347365\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" Apr 16 18:30:49.557103 ip-10-0-132-14 kubenswrapper[2569]: I0416 
18:30:49.557043 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-certificates\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.557103 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.557063 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8666c\" (UniqueName: \"kubernetes.io/projected/e8c054fa-1f67-4b74-9059-ed94490a803e-kube-api-access-8666c\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.557408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.557133 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/e8c054fa-1f67-4b74-9059-ed94490a803e-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.557408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.557161 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert\") pod \"ingress-canary-z87hx\" (UID: \"8db23076-0658-4e7c-aab7-30f06e2174dc\") " pod="openshift-ingress-canary/ingress-canary-z87hx" Apr 16 18:30:49.557408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.557186 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9d04db45-c40a-4deb-a86e-03e77a3b560e-nginx-conf\") pod \"networking-console-plugin-5cb6cf4cb4-p8jnc\" (UID: \"9d04db45-c40a-4deb-a86e-03e77a3b560e\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" Apr 16 18:30:49.643646 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.643569 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:49.643807 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.643569 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:49.643807 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.643570 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:49.645887 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.645863 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 16 18:30:49.646019 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.645870 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 16 18:30:49.646019 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.646002 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-kxnrw\"" Apr 16 18:30:49.646143 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.646114 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 16 18:30:49.646188 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.646176 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 16 18:30:49.646228 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.646212 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-7lgq2\"" Apr 16 18:30:49.658357 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.658310 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/e8c054fa-1f67-4b74-9059-ed94490a803e-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.658484 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.658372 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9d04db45-c40a-4deb-a86e-03e77a3b560e-nginx-conf\") pod \"networking-console-plugin-5cb6cf4cb4-p8jnc\" (UID: \"9d04db45-c40a-4deb-a86e-03e77a3b560e\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" Apr 16 18:30:49.658484 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.658406 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/e1716530-3a79-4ef5-bd3c-0909772664d6-image-registry-private-configuration\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.658484 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.658433 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-bound-sa-token\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.658484 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.658460 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/e8c054fa-1f67-4b74-9059-ed94490a803e-ca\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " 
pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.658691 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.658492 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:30:49.658691 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.658534 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/37a97773-82b0-4bd2-bc25-bc3330347365-klusterlet-config\") pod \"klusterlet-addon-workmgr-747745957c-f74wb\" (UID: \"37a97773-82b0-4bd2-bc25-bc3330347365\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" Apr 16 18:30:49.658691 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.658560 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/e8c054fa-1f67-4b74-9059-ed94490a803e-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.658691 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.658589 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/e8c054fa-1f67-4b74-9059-ed94490a803e-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.658691 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.658618 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/e8c054fa-1f67-4b74-9059-ed94490a803e-hub\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.658691 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.658643 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.658691 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.658673 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-p8jnc\" (UID: \"9d04db45-c40a-4deb-a86e-03e77a3b560e\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" Apr 16 18:30:49.658977 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.658698 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1716530-3a79-4ef5-bd3c-0909772664d6-ca-trust-extracted\") pod \"image-registry-68699947db-2vcnw\" (UID: 
\"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.659038 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.659008 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/e8c054fa-1f67-4b74-9059-ed94490a803e-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.659091 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.659063 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1716530-3a79-4ef5-bd3c-0909772664d6-ca-trust-extracted\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.659091 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.659081 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9d04db45-c40a-4deb-a86e-03e77a3b560e-nginx-conf\") pod \"networking-console-plugin-5cb6cf4cb4-p8jnc\" (UID: \"9d04db45-c40a-4deb-a86e-03e77a3b560e\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" Apr 16 18:30:49.659191 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:49.659180 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:30:49.659354 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:49.659195 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-68699947db-2vcnw: secret "image-registry-tls" not found Apr 16 18:30:49.659354 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:49.659254 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls podName:e1716530-3a79-4ef5-bd3c-0909772664d6 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:50.159227605 +0000 UTC m=+34.137311030 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls") pod "image-registry-68699947db-2vcnw" (UID: "e1716530-3a79-4ef5-bd3c-0909772664d6") : secret "image-registry-tls" not found Apr 16 18:30:49.662719 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:49.659921 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:30:49.662719 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:49.659986 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert podName:9d04db45-c40a-4deb-a86e-03e77a3b560e nodeName:}" failed. No retries permitted until 2026-04-16 18:30:50.15996905 +0000 UTC m=+34.138052486 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-p8jnc" (UID: "9d04db45-c40a-4deb-a86e-03e77a3b560e") : secret "networking-console-plugin-cert" not found Apr 16 18:30:49.662719 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.661142 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-tmp-dir\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:30:49.662719 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.661190 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/d43a0fd9-7414-4a3a-b051-b1c0acbb4c00-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-778f479cf5-qffzb\" (UID: \"d43a0fd9-7414-4a3a-b051-b1c0acbb4c00\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-778f479cf5-qffzb" Apr 16 18:30:49.662719 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.661239 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8666c\" (UniqueName: \"kubernetes.io/projected/e8c054fa-1f67-4b74-9059-ed94490a803e-kube-api-access-8666c\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.662719 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.661270 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-config-volume\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:30:49.662719 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.661298 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert\") pod \"ingress-canary-z87hx\" (UID: \"8db23076-0658-4e7c-aab7-30f06e2174dc\") " pod="openshift-ingress-canary/ingress-canary-z87hx" Apr 16 18:30:49.662719 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.661348 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zdkdk\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-kube-api-access-zdkdk\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.662719 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.661377 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fstxn\" (UniqueName: \"kubernetes.io/projected/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-kube-api-access-fstxn\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:30:49.662719 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.661409 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/e1716530-3a79-4ef5-bd3c-0909772664d6-trusted-ca\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.662719 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.661446 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kjdrv\" (UniqueName: \"kubernetes.io/projected/37a97773-82b0-4bd2-bc25-bc3330347365-kube-api-access-kjdrv\") pod \"klusterlet-addon-workmgr-747745957c-f74wb\" (UID: \"37a97773-82b0-4bd2-bc25-bc3330347365\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" Apr 16 18:30:49.662719 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.661471 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2mfqz\" (UniqueName: \"kubernetes.io/projected/8db23076-0658-4e7c-aab7-30f06e2174dc-kube-api-access-2mfqz\") pod \"ingress-canary-z87hx\" (UID: \"8db23076-0658-4e7c-aab7-30f06e2174dc\") " pod="openshift-ingress-canary/ingress-canary-z87hx" Apr 16 18:30:49.662719 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.661496 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1716530-3a79-4ef5-bd3c-0909772664d6-installation-pull-secrets\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.662719 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.661554 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w82d7\" (UniqueName: \"kubernetes.io/projected/d43a0fd9-7414-4a3a-b051-b1c0acbb4c00-kube-api-access-w82d7\") pod \"managed-serviceaccount-addon-agent-778f479cf5-qffzb\" (UID: \"d43a0fd9-7414-4a3a-b051-b1c0acbb4c00\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-778f479cf5-qffzb" Apr 16 18:30:49.662719 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:49.661581 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:30:49.662719 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.661604 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37a97773-82b0-4bd2-bc25-bc3330347365-tmp\") pod \"klusterlet-addon-workmgr-747745957c-f74wb\" (UID: \"37a97773-82b0-4bd2-bc25-bc3330347365\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" Apr 16 18:30:49.663643 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:49.661632 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert podName:8db23076-0658-4e7c-aab7-30f06e2174dc nodeName:}" failed. No retries permitted until 2026-04-16 18:30:50.161615703 +0000 UTC m=+34.139699143 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert") pod "ingress-canary-z87hx" (UID: "8db23076-0658-4e7c-aab7-30f06e2174dc") : secret "canary-serving-cert" not found Apr 16 18:30:49.663643 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.661668 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-certificates\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.663643 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.661928 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37a97773-82b0-4bd2-bc25-bc3330347365-tmp\") pod \"klusterlet-addon-workmgr-747745957c-f74wb\" (UID: \"37a97773-82b0-4bd2-bc25-bc3330347365\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" Apr 16 18:30:49.663643 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.662201 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-certificates\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.663861 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.663669 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca\" (UniqueName: \"kubernetes.io/secret/e8c054fa-1f67-4b74-9059-ed94490a803e-ca\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.663861 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.663782 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1716530-3a79-4ef5-bd3c-0909772664d6-trusted-ca\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.664033 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.664007 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/e8c054fa-1f67-4b74-9059-ed94490a803e-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.664505 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.664481 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub\" (UniqueName: \"kubernetes.io/secret/e8c054fa-1f67-4b74-9059-ed94490a803e-hub\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.665023 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.665004 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: 
\"kubernetes.io/secret/e1716530-3a79-4ef5-bd3c-0909772664d6-image-registry-private-configuration\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.665113 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.665077 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/e8c054fa-1f67-4b74-9059-ed94490a803e-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.666096 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.666064 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/37a97773-82b0-4bd2-bc25-bc3330347365-klusterlet-config\") pod \"klusterlet-addon-workmgr-747745957c-f74wb\" (UID: \"37a97773-82b0-4bd2-bc25-bc3330347365\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" Apr 16 18:30:49.666375 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.666355 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1716530-3a79-4ef5-bd3c-0909772664d6-installation-pull-secrets\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.666439 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.666382 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/d43a0fd9-7414-4a3a-b051-b1c0acbb4c00-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-778f479cf5-qffzb\" (UID: \"d43a0fd9-7414-4a3a-b051-b1c0acbb4c00\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-778f479cf5-qffzb" Apr 16 18:30:49.669254 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.669233 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-bound-sa-token\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.673061 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.673001 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8666c\" (UniqueName: \"kubernetes.io/projected/e8c054fa-1f67-4b74-9059-ed94490a803e-kube-api-access-8666c\") pod \"cluster-proxy-proxy-agent-58b579794f-dprjc\" (UID: \"e8c054fa-1f67-4b74-9059-ed94490a803e\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.673299 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.673279 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjdrv\" (UniqueName: \"kubernetes.io/projected/37a97773-82b0-4bd2-bc25-bc3330347365-kube-api-access-kjdrv\") pod \"klusterlet-addon-workmgr-747745957c-f74wb\" (UID: \"37a97773-82b0-4bd2-bc25-bc3330347365\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" Apr 16 18:30:49.674126 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.674082 2569 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2mfqz\" (UniqueName: \"kubernetes.io/projected/8db23076-0658-4e7c-aab7-30f06e2174dc-kube-api-access-2mfqz\") pod \"ingress-canary-z87hx\" (UID: \"8db23076-0658-4e7c-aab7-30f06e2174dc\") " pod="openshift-ingress-canary/ingress-canary-z87hx" Apr 16 18:30:49.674646 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.674615 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w82d7\" (UniqueName: \"kubernetes.io/projected/d43a0fd9-7414-4a3a-b051-b1c0acbb4c00-kube-api-access-w82d7\") pod \"managed-serviceaccount-addon-agent-778f479cf5-qffzb\" (UID: \"d43a0fd9-7414-4a3a-b051-b1c0acbb4c00\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-778f479cf5-qffzb" Apr 16 18:30:49.675530 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.675511 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdkdk\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-kube-api-access-zdkdk\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:49.744791 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.744753 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-778f479cf5-qffzb" Apr 16 18:30:49.753596 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.753566 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" Apr 16 18:30:49.762514 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.762488 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-tmp-dir\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:30:49.762619 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.762543 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-config-volume\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:30:49.762725 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.762698 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fstxn\" (UniqueName: \"kubernetes.io/projected/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-kube-api-access-fstxn\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:30:49.762839 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.762823 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:30:49.762960 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:49.762929 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:30:49.763007 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:49.762988 2569 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls podName:4561dc6f-93f8-48ae-a46a-8ae75f78fdb1 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:50.262975406 +0000 UTC m=+34.241058828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls") pod "dns-default-wq9r5" (UID: "4561dc6f-93f8-48ae-a46a-8ae75f78fdb1") : secret "dns-default-metrics-tls" not found Apr 16 18:30:49.763063 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.762928 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-tmp-dir\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:30:49.763236 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.763214 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-config-volume\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:30:49.768452 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.768432 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:30:49.790232 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:49.790202 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fstxn\" (UniqueName: \"kubernetes.io/projected/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-kube-api-access-fstxn\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:30:50.165611 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:50.165573 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:50.165842 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:50.165624 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-p8jnc\" (UID: \"9d04db45-c40a-4deb-a86e-03e77a3b560e\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" Apr 16 18:30:50.165842 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:50.165675 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert\") pod \"ingress-canary-z87hx\" (UID: \"8db23076-0658-4e7c-aab7-30f06e2174dc\") " pod="openshift-ingress-canary/ingress-canary-z87hx" Apr 16 18:30:50.165842 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:50.165736 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:30:50.165842 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:50.165763 2569 projected.go:194] Error 
preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-68699947db-2vcnw: secret "image-registry-tls" not found Apr 16 18:30:50.165842 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:50.165788 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:30:50.165842 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:50.165796 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:30:50.165842 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:50.165834 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls podName:e1716530-3a79-4ef5-bd3c-0909772664d6 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:51.165813471 +0000 UTC m=+35.143896895 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls") pod "image-registry-68699947db-2vcnw" (UID: "e1716530-3a79-4ef5-bd3c-0909772664d6") : secret "image-registry-tls" not found Apr 16 18:30:50.166233 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:50.165857 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert podName:9d04db45-c40a-4deb-a86e-03e77a3b560e nodeName:}" failed. No retries permitted until 2026-04-16 18:30:51.165840915 +0000 UTC m=+35.143924338 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-p8jnc" (UID: "9d04db45-c40a-4deb-a86e-03e77a3b560e") : secret "networking-console-plugin-cert" not found Apr 16 18:30:50.166233 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:50.165878 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert podName:8db23076-0658-4e7c-aab7-30f06e2174dc nodeName:}" failed. No retries permitted until 2026-04-16 18:30:51.165868831 +0000 UTC m=+35.143952254 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert") pod "ingress-canary-z87hx" (UID: "8db23076-0658-4e7c-aab7-30f06e2174dc") : secret "canary-serving-cert" not found Apr 16 18:30:50.266404 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:50.266372 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:30:50.266565 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:50.266464 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs\") pod \"network-metrics-daemon-kk4tm\" (UID: \"dc3c5cbb-7bc5-4228-88bf-021a899d1e57\") " pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:30:50.266565 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:50.266544 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:30:50.266565 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:50.266555 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 16 18:30:50.266706 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:50.266607 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs podName:dc3c5cbb-7bc5-4228-88bf-021a899d1e57 nodeName:}" failed. No retries permitted until 2026-04-16 18:31:22.266593943 +0000 UTC m=+66.244677366 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs") pod "network-metrics-daemon-kk4tm" (UID: "dc3c5cbb-7bc5-4228-88bf-021a899d1e57") : secret "metrics-daemon-secret" not found Apr 16 18:30:50.266706 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:50.266621 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls podName:4561dc6f-93f8-48ae-a46a-8ae75f78fdb1 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:51.266614365 +0000 UTC m=+35.244697788 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls") pod "dns-default-wq9r5" (UID: "4561dc6f-93f8-48ae-a46a-8ae75f78fdb1") : secret "dns-default-metrics-tls" not found Apr 16 18:30:50.368408 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:50.367779 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qr9mp\" (UniqueName: \"kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp\") pod \"network-check-target-tfkdr\" (UID: \"687e1330-7999-4eea-a8c8-b11fd9d8448f\") " pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:50.378481 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:50.378446 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr9mp\" (UniqueName: \"kubernetes.io/projected/687e1330-7999-4eea-a8c8-b11fd9d8448f-kube-api-access-qr9mp\") pod \"network-check-target-tfkdr\" (UID: \"687e1330-7999-4eea-a8c8-b11fd9d8448f\") " pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:50.516678 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:50.516644 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-778f479cf5-qffzb"] Apr 16 18:30:50.526242 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:50.526215 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc"] Apr 16 18:30:50.528564 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:50.528543 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb"] Apr 16 18:30:50.554766 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:50.554740 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:30:50.666506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:50.666433 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd43a0fd9_7414_4a3a_b051_b1c0acbb4c00.slice/crio-2324e75451c229d92713e9826623d00de6e29975e31158557e4f688ca3fbdb5f WatchSource:0}: Error finding container 2324e75451c229d92713e9826623d00de6e29975e31158557e4f688ca3fbdb5f: Status 404 returned error can't find the container with id 2324e75451c229d92713e9826623d00de6e29975e31158557e4f688ca3fbdb5f Apr 16 18:30:50.667309 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:50.667091 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8c054fa_1f67_4b74_9059_ed94490a803e.slice/crio-36d163f180c779cc2a9cac5e8a9ace671b505c734eaf6f9e262fc103d815c39c WatchSource:0}: Error finding container 36d163f180c779cc2a9cac5e8a9ace671b505c734eaf6f9e262fc103d815c39c: Status 404 returned error can't find the container with id 36d163f180c779cc2a9cac5e8a9ace671b505c734eaf6f9e262fc103d815c39c Apr 16 18:30:50.667749 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:50.667728 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a97773_82b0_4bd2_bc25_bc3330347365.slice/crio-bb43b7930effccf3f958f2ee3236725371b7c8b5d04807ee527aec9d8aa74956 WatchSource:0}: Error finding container bb43b7930effccf3f958f2ee3236725371b7c8b5d04807ee527aec9d8aa74956: Status 404 returned error can't find the container with id bb43b7930effccf3f958f2ee3236725371b7c8b5d04807ee527aec9d8aa74956 Apr 16 18:30:50.807690 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:50.807653 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" event={"ID":"e8c054fa-1f67-4b74-9059-ed94490a803e","Type":"ContainerStarted","Data":"36d163f180c779cc2a9cac5e8a9ace671b505c734eaf6f9e262fc103d815c39c"} Apr 16 18:30:50.809269 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:50.809243 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-tfkdr"] Apr 16 18:30:50.809423 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:50.809281 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-778f479cf5-qffzb" event={"ID":"d43a0fd9-7414-4a3a-b051-b1c0acbb4c00","Type":"ContainerStarted","Data":"2324e75451c229d92713e9826623d00de6e29975e31158557e4f688ca3fbdb5f"} Apr 16 18:30:50.810537 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:50.810511 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" event={"ID":"37a97773-82b0-4bd2-bc25-bc3330347365","Type":"ContainerStarted","Data":"bb43b7930effccf3f958f2ee3236725371b7c8b5d04807ee527aec9d8aa74956"} Apr 16 18:30:50.813455 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:50.813431 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod687e1330_7999_4eea_a8c8_b11fd9d8448f.slice/crio-be585314ea603ef7667d9f0ccc298e0572be793828ede418e65b7a6ee758eb86 WatchSource:0}: Error finding container be585314ea603ef7667d9f0ccc298e0572be793828ede418e65b7a6ee758eb86: Status 404 returned error can't 
find the container with id be585314ea603ef7667d9f0ccc298e0572be793828ede418e65b7a6ee758eb86 Apr 16 18:30:51.180537 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:51.180329 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:51.180741 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:51.180553 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-p8jnc\" (UID: \"9d04db45-c40a-4deb-a86e-03e77a3b560e\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" Apr 16 18:30:51.180741 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:51.180583 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert\") pod \"ingress-canary-z87hx\" (UID: \"8db23076-0658-4e7c-aab7-30f06e2174dc\") " pod="openshift-ingress-canary/ingress-canary-z87hx" Apr 16 18:30:51.180741 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:51.180469 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:30:51.180741 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:51.180615 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-68699947db-2vcnw: secret "image-registry-tls" not found Apr 16 18:30:51.180741 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:51.180671 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:30:51.180741 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:51.180698 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:30:51.180741 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:51.180677 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls podName:e1716530-3a79-4ef5-bd3c-0909772664d6 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:53.180661925 +0000 UTC m=+37.158745360 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls") pod "image-registry-68699947db-2vcnw" (UID: "e1716530-3a79-4ef5-bd3c-0909772664d6") : secret "image-registry-tls" not found Apr 16 18:30:51.180985 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:51.180759 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert podName:8db23076-0658-4e7c-aab7-30f06e2174dc nodeName:}" failed. No retries permitted until 2026-04-16 18:30:53.180742928 +0000 UTC m=+37.158826355 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert") pod "ingress-canary-z87hx" (UID: "8db23076-0658-4e7c-aab7-30f06e2174dc") : secret "canary-serving-cert" not found Apr 16 18:30:51.180985 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:51.180779 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert podName:9d04db45-c40a-4deb-a86e-03e77a3b560e nodeName:}" failed. No retries permitted until 2026-04-16 18:30:53.180767593 +0000 UTC m=+37.158851017 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-p8jnc" (UID: "9d04db45-c40a-4deb-a86e-03e77a3b560e") : secret "networking-console-plugin-cert" not found Apr 16 18:30:51.281752 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:51.281669 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:30:51.281901 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:51.281830 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:30:51.281961 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:51.281909 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls podName:4561dc6f-93f8-48ae-a46a-8ae75f78fdb1 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:53.281889724 +0000 UTC m=+37.259973159 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls") pod "dns-default-wq9r5" (UID: "4561dc6f-93f8-48ae-a46a-8ae75f78fdb1") : secret "dns-default-metrics-tls" not found Apr 16 18:30:51.823968 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:51.823894 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-tfkdr" event={"ID":"687e1330-7999-4eea-a8c8-b11fd9d8448f","Type":"ContainerStarted","Data":"be585314ea603ef7667d9f0ccc298e0572be793828ede418e65b7a6ee758eb86"} Apr 16 18:30:51.831710 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:51.831623 2569 generic.go:358] "Generic (PLEG): container finished" podID="c9036c3c-a41d-405f-acbf-c30968863203" containerID="a27617a685b61b9f87ad78018cd3a2d4f416f2d4944008e324420c870c06b96f" exitCode=0 Apr 16 18:30:51.831710 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:51.831668 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2jts" event={"ID":"c9036c3c-a41d-405f-acbf-c30968863203","Type":"ContainerDied","Data":"a27617a685b61b9f87ad78018cd3a2d4f416f2d4944008e324420c870c06b96f"} Apr 16 18:30:52.399623 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:52.399458 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret\") pod \"global-pull-secret-syncer-66tjb\" (UID: \"fd1a63ff-830c-4979-9f9d-bd6268584fbf\") " pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:52.408692 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:52.408631 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/fd1a63ff-830c-4979-9f9d-bd6268584fbf-original-pull-secret\") pod \"global-pull-secret-syncer-66tjb\" (UID: \"fd1a63ff-830c-4979-9f9d-bd6268584fbf\") " pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:52.693723 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:52.693259 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-66tjb" Apr 16 18:30:52.843476 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:52.842537 2569 generic.go:358] "Generic (PLEG): container finished" podID="c9036c3c-a41d-405f-acbf-c30968863203" containerID="5732e73e49634655f6845fb7817806cddab1d7a486d65452671b415e249fc4c8" exitCode=0 Apr 16 18:30:52.843476 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:52.842598 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2jts" event={"ID":"c9036c3c-a41d-405f-acbf-c30968863203","Type":"ContainerDied","Data":"5732e73e49634655f6845fb7817806cddab1d7a486d65452671b415e249fc4c8"} Apr 16 18:30:52.864412 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:52.864379 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-66tjb"] Apr 16 18:30:52.872381 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:30:52.872347 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd1a63ff_830c_4979_9f9d_bd6268584fbf.slice/crio-02b2dc982243ba7095295ebe62789d858c3bdf4e59696ebde80ff10af097c52e WatchSource:0}: Error finding container 02b2dc982243ba7095295ebe62789d858c3bdf4e59696ebde80ff10af097c52e: Status 404 returned error can't find the container with id 02b2dc982243ba7095295ebe62789d858c3bdf4e59696ebde80ff10af097c52e Apr 16 18:30:53.215702 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:53.215414 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:53.215867 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:53.215731 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-p8jnc\" (UID: \"9d04db45-c40a-4deb-a86e-03e77a3b560e\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" Apr 16 18:30:53.215867 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:53.215779 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert\") pod \"ingress-canary-z87hx\" (UID: \"8db23076-0658-4e7c-aab7-30f06e2174dc\") " pod="openshift-ingress-canary/ingress-canary-z87hx" Apr 16 18:30:53.215984 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:53.215973 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:30:53.216034 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:53.216028 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert podName:8db23076-0658-4e7c-aab7-30f06e2174dc nodeName:}" failed. No retries permitted until 2026-04-16 18:30:57.216010317 +0000 UTC m=+41.194093740 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert") pod "ingress-canary-z87hx" (UID: "8db23076-0658-4e7c-aab7-30f06e2174dc") : secret "canary-serving-cert" not found Apr 16 18:30:53.216198 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:53.216179 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:30:53.216281 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:53.216202 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-68699947db-2vcnw: secret "image-registry-tls" not found Apr 16 18:30:53.216281 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:53.216246 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls podName:e1716530-3a79-4ef5-bd3c-0909772664d6 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:57.2162308 +0000 UTC m=+41.194314238 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls") pod "image-registry-68699947db-2vcnw" (UID: "e1716530-3a79-4ef5-bd3c-0909772664d6") : secret "image-registry-tls" not found Apr 16 18:30:53.216395 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:53.216321 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:30:53.216395 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:53.216379 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert podName:9d04db45-c40a-4deb-a86e-03e77a3b560e nodeName:}" failed. No retries permitted until 2026-04-16 18:30:57.216365707 +0000 UTC m=+41.194449136 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-p8jnc" (UID: "9d04db45-c40a-4deb-a86e-03e77a3b560e") : secret "networking-console-plugin-cert" not found Apr 16 18:30:53.317053 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:53.317013 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:30:53.317231 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:53.317219 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:30:53.317297 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:53.317283 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls podName:4561dc6f-93f8-48ae-a46a-8ae75f78fdb1 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:57.31726472 +0000 UTC m=+41.295348146 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls") pod "dns-default-wq9r5" (UID: "4561dc6f-93f8-48ae-a46a-8ae75f78fdb1") : secret "dns-default-metrics-tls" not found Apr 16 18:30:53.847580 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:53.847516 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-66tjb" event={"ID":"fd1a63ff-830c-4979-9f9d-bd6268584fbf","Type":"ContainerStarted","Data":"02b2dc982243ba7095295ebe62789d858c3bdf4e59696ebde80ff10af097c52e"} Apr 16 18:30:53.853877 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:53.853843 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2jts" event={"ID":"c9036c3c-a41d-405f-acbf-c30968863203","Type":"ContainerStarted","Data":"451464a53f9771450dd782011fedafd62f9fabc3c6687cb5578fdcbabf5c5a23"} Apr 16 18:30:53.876168 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:53.875651 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-d2jts" podStartSLOduration=6.322920696 podStartE2EDuration="37.875631316s" podCreationTimestamp="2026-04-16 18:30:16 +0000 UTC" firstStartedPulling="2026-04-16 18:30:19.15630881 +0000 UTC m=+3.134392247" lastFinishedPulling="2026-04-16 18:30:50.709019433 +0000 UTC m=+34.687102867" observedRunningTime="2026-04-16 18:30:53.874126476 +0000 UTC m=+37.852209923" watchObservedRunningTime="2026-04-16 18:30:53.875631316 +0000 UTC m=+37.853714762" Apr 16 18:30:57.249123 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:57.249088 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:30:57.249639 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:57.249140 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-p8jnc\" (UID: \"9d04db45-c40a-4deb-a86e-03e77a3b560e\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" Apr 16 18:30:57.249639 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:57.249182 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert\") pod \"ingress-canary-z87hx\" (UID: \"8db23076-0658-4e7c-aab7-30f06e2174dc\") " pod="openshift-ingress-canary/ingress-canary-z87hx" Apr 16 18:30:57.249639 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:57.249250 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:30:57.249639 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:57.249269 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-68699947db-2vcnw: secret "image-registry-tls" not found Apr 16 18:30:57.249639 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:57.249317 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret 
"networking-console-plugin-cert" not found Apr 16 18:30:57.249639 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:57.249328 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls podName:e1716530-3a79-4ef5-bd3c-0909772664d6 nodeName:}" failed. No retries permitted until 2026-04-16 18:31:05.249307308 +0000 UTC m=+49.227390735 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls") pod "image-registry-68699947db-2vcnw" (UID: "e1716530-3a79-4ef5-bd3c-0909772664d6") : secret "image-registry-tls" not found Apr 16 18:30:57.249639 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:57.249395 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:30:57.249639 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:57.249411 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert podName:9d04db45-c40a-4deb-a86e-03e77a3b560e nodeName:}" failed. No retries permitted until 2026-04-16 18:31:05.249392368 +0000 UTC m=+49.227475798 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-p8jnc" (UID: "9d04db45-c40a-4deb-a86e-03e77a3b560e") : secret "networking-console-plugin-cert" not found Apr 16 18:30:57.249639 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:57.249442 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert podName:8db23076-0658-4e7c-aab7-30f06e2174dc nodeName:}" failed. No retries permitted until 2026-04-16 18:31:05.249431202 +0000 UTC m=+49.227514624 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert") pod "ingress-canary-z87hx" (UID: "8db23076-0658-4e7c-aab7-30f06e2174dc") : secret "canary-serving-cert" not found Apr 16 18:30:57.350549 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:30:57.350511 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:30:57.350752 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:57.350674 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:30:57.350818 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:30:57.350755 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls podName:4561dc6f-93f8-48ae-a46a-8ae75f78fdb1 nodeName:}" failed. No retries permitted until 2026-04-16 18:31:05.350733384 +0000 UTC m=+49.328816822 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls") pod "dns-default-wq9r5" (UID: "4561dc6f-93f8-48ae-a46a-8ae75f78fdb1") : secret "dns-default-metrics-tls" not found Apr 16 18:31:00.873495 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:00.873399 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" event={"ID":"37a97773-82b0-4bd2-bc25-bc3330347365","Type":"ContainerStarted","Data":"14399148122c1c441d7b425d6cd34c3a894ffadc87471c7f6d9cc7e9f311c7da"} Apr 16 18:31:00.873966 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:00.873594 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" Apr 16 18:31:00.874974 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:00.874940 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-tfkdr" event={"ID":"687e1330-7999-4eea-a8c8-b11fd9d8448f","Type":"ContainerStarted","Data":"fb450d225920a114eca2b76d60708cda80c9650bf5461028e30f5d4bf4c27d27"} Apr 16 18:31:00.875109 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:00.875054 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:31:00.875614 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:00.875594 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" Apr 16 18:31:00.876241 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:00.876221 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-778f479cf5-qffzb" event={"ID":"d43a0fd9-7414-4a3a-b051-b1c0acbb4c00","Type":"ContainerStarted","Data":"0492ae30f4189cdd5d094bcda260f956982535d7aa7256d8343dcef07b5246e1"} Apr 16 18:31:00.877460 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:00.877441 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" event={"ID":"e8c054fa-1f67-4b74-9059-ed94490a803e","Type":"ContainerStarted","Data":"5364ddefbc4120811de013a9efefc8602ffad7d2883ca3236dd70602502e880f"} Apr 16 18:31:00.878528 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:00.878512 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-66tjb" event={"ID":"fd1a63ff-830c-4979-9f9d-bd6268584fbf","Type":"ContainerStarted","Data":"0bb830c1a260aff6d99d4adbbaf2ab1d88b3c930164585868e69fdfed9161a7d"} Apr 16 18:31:00.889411 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:00.889370 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" podStartSLOduration=23.341379057 podStartE2EDuration="32.889358958s" podCreationTimestamp="2026-04-16 18:30:28 +0000 UTC" firstStartedPulling="2026-04-16 18:30:50.685210747 +0000 UTC m=+34.663294184" lastFinishedPulling="2026-04-16 18:31:00.233190658 +0000 UTC m=+44.211274085" observedRunningTime="2026-04-16 18:31:00.888742321 +0000 UTC m=+44.866825778" watchObservedRunningTime="2026-04-16 18:31:00.889358958 +0000 UTC m=+44.867442396" Apr 16 18:31:00.907211 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:00.907172 2569 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-tfkdr" podStartSLOduration=35.48869701 podStartE2EDuration="44.907163501s" podCreationTimestamp="2026-04-16 18:30:16 +0000 UTC" firstStartedPulling="2026-04-16 18:30:50.815159805 +0000 UTC m=+34.793243231" lastFinishedPulling="2026-04-16 18:31:00.233626283 +0000 UTC m=+44.211709722" observedRunningTime="2026-04-16 18:31:00.906470679 +0000 UTC m=+44.884554125" watchObservedRunningTime="2026-04-16 18:31:00.907163501 +0000 UTC m=+44.885246945" Apr 16 18:31:00.922092 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:00.922053 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-66tjb" podStartSLOduration=33.273089012 podStartE2EDuration="40.92204308s" podCreationTimestamp="2026-04-16 18:30:20 +0000 UTC" firstStartedPulling="2026-04-16 18:30:52.8772441 +0000 UTC m=+36.855327527" lastFinishedPulling="2026-04-16 18:31:00.526198168 +0000 UTC m=+44.504281595" observedRunningTime="2026-04-16 18:31:00.921703005 +0000 UTC m=+44.899786461" watchObservedRunningTime="2026-04-16 18:31:00.92204308 +0000 UTC m=+44.900126548" Apr 16 18:31:00.938425 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:00.938362 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-778f479cf5-qffzb" podStartSLOduration=23.390323319 podStartE2EDuration="32.938329728s" podCreationTimestamp="2026-04-16 18:30:28 +0000 UTC" firstStartedPulling="2026-04-16 18:30:50.685314956 +0000 UTC m=+34.663398383" lastFinishedPulling="2026-04-16 18:31:00.233321365 +0000 UTC m=+44.211404792" observedRunningTime="2026-04-16 18:31:00.937883758 +0000 UTC m=+44.915967205" watchObservedRunningTime="2026-04-16 18:31:00.938329728 +0000 UTC m=+44.916413173" Apr 16 18:31:03.887257 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:03.887223 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" event={"ID":"e8c054fa-1f67-4b74-9059-ed94490a803e","Type":"ContainerStarted","Data":"0844209e68ea604a570dc0be4d58f0ce3acef272b08cd6c8323a07e6add0a9c9"} Apr 16 18:31:03.887257 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:03.887256 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" event={"ID":"e8c054fa-1f67-4b74-9059-ed94490a803e","Type":"ContainerStarted","Data":"2bca57caa612dd5619035d54be6d23d1e6dc768155384b9e01f9e81009ee64a7"} Apr 16 18:31:03.905043 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:03.904997 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" podStartSLOduration=23.806821726 podStartE2EDuration="35.904983546s" podCreationTimestamp="2026-04-16 18:30:28 +0000 UTC" firstStartedPulling="2026-04-16 18:30:50.68521025 +0000 UTC m=+34.663293681" lastFinishedPulling="2026-04-16 18:31:02.783372075 +0000 UTC m=+46.761455501" observedRunningTime="2026-04-16 18:31:03.904604514 +0000 UTC m=+47.882687961" watchObservedRunningTime="2026-04-16 18:31:03.904983546 +0000 UTC m=+47.883066990" Apr 16 18:31:05.310625 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:05.310586 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
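The pod_startup_latency_tracker entries above all encode one relationship: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), so the SLO figure charges the pod's startup path but not registry latency. A quick check against the multus-additional-cni-plugins-d2jts entry, using the monotonic m=+ offsets recorded in the log:

    package main

    import "fmt"

    // Figures from the multus-additional-cni-plugins-d2jts entry:
    // firstStartedPulling m=+3.134392247, lastFinishedPulling m=+34.687102867,
    // podStartE2EDuration 37.875631316s.
    func main() {
        e2e := 37.875631316
        pull := 34.687102867 - 3.134392247 // image-pull window, ~31.55s
        fmt.Printf("%.9f\n", e2e-pull)     // ≈ 6.322920696, the logged podStartSLOduration
    }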
\"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:31:05.311090 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:05.310631 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-p8jnc\" (UID: \"9d04db45-c40a-4deb-a86e-03e77a3b560e\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" Apr 16 18:31:05.311090 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:05.310667 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert\") pod \"ingress-canary-z87hx\" (UID: \"8db23076-0658-4e7c-aab7-30f06e2174dc\") " pod="openshift-ingress-canary/ingress-canary-z87hx" Apr 16 18:31:05.311090 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:05.310746 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:31:05.311090 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:05.310758 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:31:05.311090 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:05.310841 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert podName:9d04db45-c40a-4deb-a86e-03e77a3b560e nodeName:}" failed. No retries permitted until 2026-04-16 18:31:21.310820566 +0000 UTC m=+65.288903989 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-p8jnc" (UID: "9d04db45-c40a-4deb-a86e-03e77a3b560e") : secret "networking-console-plugin-cert" not found Apr 16 18:31:05.311090 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:05.310768 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-68699947db-2vcnw: secret "image-registry-tls" not found Apr 16 18:31:05.311090 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:05.310897 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls podName:e1716530-3a79-4ef5-bd3c-0909772664d6 nodeName:}" failed. No retries permitted until 2026-04-16 18:31:21.310885133 +0000 UTC m=+65.288968555 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls") pod "image-registry-68699947db-2vcnw" (UID: "e1716530-3a79-4ef5-bd3c-0909772664d6") : secret "image-registry-tls" not found Apr 16 18:31:05.311090 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:05.310751 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:31:05.311090 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:05.310926 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert podName:8db23076-0658-4e7c-aab7-30f06e2174dc nodeName:}" failed. No retries permitted until 2026-04-16 18:31:21.310919348 +0000 UTC m=+65.289002771 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert") pod "ingress-canary-z87hx" (UID: "8db23076-0658-4e7c-aab7-30f06e2174dc") : secret "canary-serving-cert" not found Apr 16 18:31:05.412000 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:05.411965 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:31:05.412161 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:05.412099 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:31:05.412208 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:05.412163 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls podName:4561dc6f-93f8-48ae-a46a-8ae75f78fdb1 nodeName:}" failed. No retries permitted until 2026-04-16 18:31:21.412148353 +0000 UTC m=+65.390231776 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls") pod "dns-default-wq9r5" (UID: "4561dc6f-93f8-48ae-a46a-8ae75f78fdb1") : secret "dns-default-metrics-tls" not found Apr 16 18:31:15.807588 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:15.807560 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s62vp" Apr 16 18:31:21.332589 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:21.332539 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:31:21.332589 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:21.332595 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-p8jnc\" (UID: \"9d04db45-c40a-4deb-a86e-03e77a3b560e\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" Apr 16 18:31:21.333055 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:21.332633 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert\") pod \"ingress-canary-z87hx\" (UID: \"8db23076-0658-4e7c-aab7-30f06e2174dc\") " pod="openshift-ingress-canary/ingress-canary-z87hx" Apr 16 18:31:21.333055 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:21.332691 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:31:21.333055 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:21.332715 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-68699947db-2vcnw: secret "image-registry-tls" not found Apr 16 18:31:21.333055 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:21.332749 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:31:21.333055 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:21.332755 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:31:21.333055 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:21.332770 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls podName:e1716530-3a79-4ef5-bd3c-0909772664d6 nodeName:}" failed. No retries permitted until 2026-04-16 18:31:53.332753845 +0000 UTC m=+97.310837268 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls") pod "image-registry-68699947db-2vcnw" (UID: "e1716530-3a79-4ef5-bd3c-0909772664d6") : secret "image-registry-tls" not found Apr 16 18:31:21.333055 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:21.332796 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert podName:9d04db45-c40a-4deb-a86e-03e77a3b560e nodeName:}" failed. No retries permitted until 2026-04-16 18:31:53.332783134 +0000 UTC m=+97.310866583 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-p8jnc" (UID: "9d04db45-c40a-4deb-a86e-03e77a3b560e") : secret "networking-console-plugin-cert" not found Apr 16 18:31:21.333055 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:21.332809 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert podName:8db23076-0658-4e7c-aab7-30f06e2174dc nodeName:}" failed. No retries permitted until 2026-04-16 18:31:53.332803378 +0000 UTC m=+97.310886800 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert") pod "ingress-canary-z87hx" (UID: "8db23076-0658-4e7c-aab7-30f06e2174dc") : secret "canary-serving-cert" not found Apr 16 18:31:21.433455 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:21.433423 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:31:21.433586 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:21.433531 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:31:21.433630 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:21.433601 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls podName:4561dc6f-93f8-48ae-a46a-8ae75f78fdb1 nodeName:}" failed. No retries permitted until 2026-04-16 18:31:53.433585843 +0000 UTC m=+97.411669267 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls") pod "dns-default-wq9r5" (UID: "4561dc6f-93f8-48ae-a46a-8ae75f78fdb1") : secret "dns-default-metrics-tls" not found Apr 16 18:31:22.339850 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:22.339806 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs\") pod \"network-metrics-daemon-kk4tm\" (UID: \"dc3c5cbb-7bc5-4228-88bf-021a899d1e57\") " pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:31:22.340203 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:22.339949 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 16 18:31:22.340203 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:22.340013 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs podName:dc3c5cbb-7bc5-4228-88bf-021a899d1e57 nodeName:}" failed. No retries permitted until 2026-04-16 18:32:26.339998217 +0000 UTC m=+130.318081640 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs") pod "network-metrics-daemon-kk4tm" (UID: "dc3c5cbb-7bc5-4228-88bf-021a899d1e57") : secret "metrics-daemon-secret" not found Apr 16 18:31:31.883072 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:31.883043 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-tfkdr" Apr 16 18:31:53.372087 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:53.372038 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:31:53.372087 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:53.372094 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-p8jnc\" (UID: \"9d04db45-c40a-4deb-a86e-03e77a3b560e\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" Apr 16 18:31:53.372610 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:53.372122 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert\") pod \"ingress-canary-z87hx\" (UID: \"8db23076-0658-4e7c-aab7-30f06e2174dc\") " pod="openshift-ingress-canary/ingress-canary-z87hx" Apr 16 18:31:53.372610 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:53.372188 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:31:53.372610 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:53.372210 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-68699947db-2vcnw: secret "image-registry-tls" not found Apr 16 18:31:53.372610 ip-10-0-132-14 
kubenswrapper[2569]: E0416 18:31:53.372224 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:31:53.372610 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:53.372265 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls podName:e1716530-3a79-4ef5-bd3c-0909772664d6 nodeName:}" failed. No retries permitted until 2026-04-16 18:32:57.372249735 +0000 UTC m=+161.350333157 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls") pod "image-registry-68699947db-2vcnw" (UID: "e1716530-3a79-4ef5-bd3c-0909772664d6") : secret "image-registry-tls" not found Apr 16 18:31:53.372610 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:53.372270 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:31:53.372610 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:53.372279 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert podName:8db23076-0658-4e7c-aab7-30f06e2174dc nodeName:}" failed. No retries permitted until 2026-04-16 18:32:57.372273042 +0000 UTC m=+161.350356465 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert") pod "ingress-canary-z87hx" (UID: "8db23076-0658-4e7c-aab7-30f06e2174dc") : secret "canary-serving-cert" not found Apr 16 18:31:53.372610 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:53.372371 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert podName:9d04db45-c40a-4deb-a86e-03e77a3b560e nodeName:}" failed. No retries permitted until 2026-04-16 18:32:57.372331222 +0000 UTC m=+161.350414644 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-p8jnc" (UID: "9d04db45-c40a-4deb-a86e-03e77a3b560e") : secret "networking-console-plugin-cert" not found Apr 16 18:31:53.473317 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:31:53.473284 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5" Apr 16 18:31:53.473494 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:53.473444 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:31:53.473538 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:31:53.473508 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls podName:4561dc6f-93f8-48ae-a46a-8ae75f78fdb1 nodeName:}" failed. No retries permitted until 2026-04-16 18:32:57.473491714 +0000 UTC m=+161.451575140 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls") pod "dns-default-wq9r5" (UID: "4561dc6f-93f8-48ae-a46a-8ae75f78fdb1") : secret "dns-default-metrics-tls" not found Apr 16 18:32:26.425138 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:26.425102 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs\") pod \"network-metrics-daemon-kk4tm\" (UID: \"dc3c5cbb-7bc5-4228-88bf-021a899d1e57\") " pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:32:26.425771 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:32:26.425245 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 16 18:32:26.425771 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:32:26.425347 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs podName:dc3c5cbb-7bc5-4228-88bf-021a899d1e57 nodeName:}" failed. No retries permitted until 2026-04-16 18:34:28.425309708 +0000 UTC m=+252.403393151 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs") pod "network-metrics-daemon-kk4tm" (UID: "dc3c5cbb-7bc5-4228-88bf-021a899d1e57") : secret "metrics-daemon-secret" not found Apr 16 18:32:38.839045 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:38.839018 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-8h69z_8b1f3fed-8fbc-4087-a06e-b4bb1396ba36/dns-node-resolver/0.log" Apr 16 18:32:40.052893 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:40.052862 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-rfstz_a4e163bd-89bf-4b55-9d51-38032e333eb1/node-ca/0.log" Apr 16 18:32:52.404434 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:32:52.404377 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" podUID="9d04db45-c40a-4deb-a86e-03e77a3b560e" Apr 16 18:32:52.419088 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:32:52.419063 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[registry-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-68699947db-2vcnw" podUID="e1716530-3a79-4ef5-bd3c-0909772664d6" Apr 16 18:32:52.504669 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:32:52.504628 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-z87hx" podUID="8db23076-0658-4e7c-aab7-30f06e2174dc" Apr 16 18:32:52.509789 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:32:52.509764 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-dns/dns-default-wq9r5" podUID="4561dc6f-93f8-48ae-a46a-8ae75f78fdb1" Apr 16 18:32:52.663573 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:32:52.663495 2569 
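All five stuck pods fail the same way at 18:32:52: the pod worker's wait for volumes to attach and mount exhausts its context (on the order of two minutes in a stock kubelet) while the mount operations are still in backoff, so the sync is skipped and retried. The underlying condition has not changed since 18:30:51: the referenced secrets simply do not exist yet. A minimal client-go sketch for confirming that from a workstation; the kubeconfig path is a placeholder, and the namespace/name pairs are exactly the ones this log complains about:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder path; point this at a real admin kubeconfig.
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // The secret references failing in the log above.
        refs := map[string]string{
            "openshift-image-registry":  "image-registry-tls",
            "openshift-ingress-canary":  "canary-serving-cert",
            "openshift-network-console": "networking-console-plugin-cert",
            "openshift-dns":             "dns-default-metrics-tls",
            "openshift-multus":          "metrics-daemon-secret",
        }
        for ns, name := range refs {
            if _, err := client.CoreV1().Secrets(ns).Get(context.TODO(), name, metav1.GetOptions{}); err != nil {
                fmt.Printf("%s/%s: %v\n", ns, name, err)
                continue
            }
            fmt.Printf("%s/%s: present\n", ns, name)
        }
    }

Once the owning operators create these secrets, the next mount retry succeeds and the pods start, which is exactly what the log records at 18:32:57.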
Apr 16 18:32:53.134313 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:53.134279 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-68699947db-2vcnw"
Apr 16 18:32:53.134499 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:53.134279 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc"
Apr 16 18:32:57.469384 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:57.468633 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert\") pod \"ingress-canary-z87hx\" (UID: \"8db23076-0658-4e7c-aab7-30f06e2174dc\") " pod="openshift-ingress-canary/ingress-canary-z87hx"
Apr 16 18:32:57.469384 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:57.468777 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw"
Apr 16 18:32:57.469384 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:57.468858 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-p8jnc\" (UID: \"9d04db45-c40a-4deb-a86e-03e77a3b560e\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc"
Apr 16 18:32:57.473829 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:57.473793 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8db23076-0658-4e7c-aab7-30f06e2174dc-cert\") pod \"ingress-canary-z87hx\" (UID: \"8db23076-0658-4e7c-aab7-30f06e2174dc\") " pod="openshift-ingress-canary/ingress-canary-z87hx"
Apr 16 18:32:57.473952 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:57.473793 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9d04db45-c40a-4deb-a86e-03e77a3b560e-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-p8jnc\" (UID: \"9d04db45-c40a-4deb-a86e-03e77a3b560e\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc"
Apr 16 18:32:57.473993 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:57.473954 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls\") pod \"image-registry-68699947db-2vcnw\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " pod="openshift-image-registry/image-registry-68699947db-2vcnw"
Apr 16 18:32:57.569793 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:57.569755 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5"
Apr 16 18:32:57.572014 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:57.571985 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4561dc6f-93f8-48ae-a46a-8ae75f78fdb1-metrics-tls\") pod \"dns-default-wq9r5\" (UID: \"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1\") " pod="openshift-dns/dns-default-wq9r5"
Apr 16 18:32:57.638647 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:57.638616 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"default-dockercfg-4rwc8\""
Apr 16 18:32:57.639815 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:57.639800 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-r7rs7\""
Apr 16 18:32:57.645393 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:57.645378 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-68699947db-2vcnw"
Apr 16 18:32:57.645462 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:57.645448 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc"
Apr 16 18:32:57.769384 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:57.769328 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc"]
Apr 16 18:32:57.772657 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:32:57.772631 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d04db45_c40a_4deb_a86e_03e77a3b560e.slice/crio-d286aaced879013e968c3abdac6ed37e95ba8788e5f8fd2f82f2c4069cb67cfc WatchSource:0}: Error finding container d286aaced879013e968c3abdac6ed37e95ba8788e5f8fd2f82f2c4069cb67cfc: Status 404 returned error can't find the container with id d286aaced879013e968c3abdac6ed37e95ba8788e5f8fd2f82f2c4069cb67cfc
Apr 16 18:32:57.787954 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:57.787933 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-68699947db-2vcnw"]
Apr 16 18:32:57.790852 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:32:57.790829 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1716530_3a79_4ef5_bd3c_0909772664d6.slice/crio-39a68afd02b0296d9c173d7b5d36754a53ed44e3914282c234aed2c7611bf713 WatchSource:0}: Error finding container 39a68afd02b0296d9c173d7b5d36754a53ed44e3914282c234aed2c7611bf713: Status 404 returned error can't find the container with id 39a68afd02b0296d9c173d7b5d36754a53ed44e3914282c234aed2c7611bf713
Apr 16 18:32:58.147698 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:58.147594 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-68699947db-2vcnw" event={"ID":"e1716530-3a79-4ef5-bd3c-0909772664d6","Type":"ContainerStarted","Data":"7042b5b61cac410973b5d358f75c6e790140638bc71cf65801d3443ae9112c34"}
event={"ID":"e1716530-3a79-4ef5-bd3c-0909772664d6","Type":"ContainerStarted","Data":"39a68afd02b0296d9c173d7b5d36754a53ed44e3914282c234aed2c7611bf713"} Apr 16 18:32:58.147918 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:58.147709 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:32:58.148572 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:58.148541 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" event={"ID":"9d04db45-c40a-4deb-a86e-03e77a3b560e","Type":"ContainerStarted","Data":"d286aaced879013e968c3abdac6ed37e95ba8788e5f8fd2f82f2c4069cb67cfc"} Apr 16 18:32:58.167134 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:58.167085 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-68699947db-2vcnw" podStartSLOduration=142.16706877 podStartE2EDuration="2m22.16706877s" podCreationTimestamp="2026-04-16 18:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:32:58.1665493 +0000 UTC m=+162.144632757" watchObservedRunningTime="2026-04-16 18:32:58.16706877 +0000 UTC m=+162.145152215" Apr 16 18:32:59.152401 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.152358 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" event={"ID":"9d04db45-c40a-4deb-a86e-03e77a3b560e","Type":"ContainerStarted","Data":"99556f0ef38d313d40764a927d7e27dc016ad4198c04c29dc5b29db9524c6a53"} Apr 16 18:32:59.169513 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.169469 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-p8jnc" podStartSLOduration=161.258767735 podStartE2EDuration="2m42.169454317s" podCreationTimestamp="2026-04-16 18:30:17 +0000 UTC" firstStartedPulling="2026-04-16 18:32:57.774362571 +0000 UTC m=+161.752445994" lastFinishedPulling="2026-04-16 18:32:58.685049138 +0000 UTC m=+162.663132576" observedRunningTime="2026-04-16 18:32:59.168836473 +0000 UTC m=+163.146919919" watchObservedRunningTime="2026-04-16 18:32:59.169454317 +0000 UTC m=+163.147537762" Apr 16 18:32:59.314734 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.314704 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-j6xwp"] Apr 16 18:32:59.317906 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.317886 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.320716 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.320691 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 16 18:32:59.320815 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.320779 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-gqtpm\"" Apr 16 18:32:59.321554 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.321531 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 16 18:32:59.321554 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.321531 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 16 18:32:59.321762 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.321549 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 16 18:32:59.332870 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.332846 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-j6xwp"] Apr 16 18:32:59.486742 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.486706 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-j6xwp\" (UID: \"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96\") " pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.486932 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.486772 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96-data-volume\") pod \"insights-runtime-extractor-j6xwp\" (UID: \"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96\") " pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.486932 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.486804 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr2zr\" (UniqueName: \"kubernetes.io/projected/98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96-kube-api-access-qr2zr\") pod \"insights-runtime-extractor-j6xwp\" (UID: \"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96\") " pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.486932 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.486829 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-j6xwp\" (UID: \"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96\") " pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.486932 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.486848 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96-crio-socket\") pod \"insights-runtime-extractor-j6xwp\" (UID: \"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96\") " 
pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.587836 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.587800 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-j6xwp\" (UID: \"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96\") " pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.588035 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.587858 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96-data-volume\") pod \"insights-runtime-extractor-j6xwp\" (UID: \"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96\") " pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.588035 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.587879 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qr2zr\" (UniqueName: \"kubernetes.io/projected/98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96-kube-api-access-qr2zr\") pod \"insights-runtime-extractor-j6xwp\" (UID: \"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96\") " pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.588035 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.587900 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-j6xwp\" (UID: \"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96\") " pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.588035 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.587919 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96-crio-socket\") pod \"insights-runtime-extractor-j6xwp\" (UID: \"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96\") " pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.588267 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.588097 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96-crio-socket\") pod \"insights-runtime-extractor-j6xwp\" (UID: \"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96\") " pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.588319 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.588284 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96-data-volume\") pod \"insights-runtime-extractor-j6xwp\" (UID: \"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96\") " pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.588542 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.588524 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-j6xwp\" (UID: \"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96\") " pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.590226 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.590202 2569 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-j6xwp\" (UID: \"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96\") " pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.599840 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.599820 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr2zr\" (UniqueName: \"kubernetes.io/projected/98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96-kube-api-access-qr2zr\") pod \"insights-runtime-extractor-j6xwp\" (UID: \"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96\") " pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.628031 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.628002 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-j6xwp" Apr 16 18:32:59.742924 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:32:59.742853 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-j6xwp"] Apr 16 18:32:59.745777 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:32:59.745742 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98ba6330_64cf_4d4c_8ae0_2ddbe0a40d96.slice/crio-21fb3e64bef6910de10ca33e8413aa0c55e88ce69a739afbe0f6028152fc290c WatchSource:0}: Error finding container 21fb3e64bef6910de10ca33e8413aa0c55e88ce69a739afbe0f6028152fc290c: Status 404 returned error can't find the container with id 21fb3e64bef6910de10ca33e8413aa0c55e88ce69a739afbe0f6028152fc290c Apr 16 18:33:00.155660 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:00.155570 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-j6xwp" event={"ID":"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96","Type":"ContainerStarted","Data":"21bbb59d793dd9239688767cae6f4148ce1a70467a05dcca0ace143014c5a58a"} Apr 16 18:33:00.155660 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:00.155617 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-j6xwp" event={"ID":"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96","Type":"ContainerStarted","Data":"21fb3e64bef6910de10ca33e8413aa0c55e88ce69a739afbe0f6028152fc290c"} Apr 16 18:33:00.873952 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:00.873883 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" podUID="37a97773-82b0-4bd2-bc25-bc3330347365" containerName="acm-agent" probeResult="failure" output="Get \"http://10.132.0.9:8000/readyz\": dial tcp 10.132.0.9:8000: connect: connection refused" Apr 16 18:33:01.161047 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:01.160962 2569 generic.go:358] "Generic (PLEG): container finished" podID="d43a0fd9-7414-4a3a-b051-b1c0acbb4c00" containerID="0492ae30f4189cdd5d094bcda260f956982535d7aa7256d8343dcef07b5246e1" exitCode=255 Apr 16 18:33:01.161487 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:01.161037 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-778f479cf5-qffzb" event={"ID":"d43a0fd9-7414-4a3a-b051-b1c0acbb4c00","Type":"ContainerDied","Data":"0492ae30f4189cdd5d094bcda260f956982535d7aa7256d8343dcef07b5246e1"} Apr 16 18:33:01.161487 
ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:01.161411 2569 scope.go:117] "RemoveContainer" containerID="0492ae30f4189cdd5d094bcda260f956982535d7aa7256d8343dcef07b5246e1" Apr 16 18:33:01.163205 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:01.163181 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-j6xwp" event={"ID":"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96","Type":"ContainerStarted","Data":"15976847076fef2651552dcabb99b938d6611be0709d17e1bd8327c3d8309e39"} Apr 16 18:33:01.164563 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:01.164542 2569 generic.go:358] "Generic (PLEG): container finished" podID="37a97773-82b0-4bd2-bc25-bc3330347365" containerID="14399148122c1c441d7b425d6cd34c3a894ffadc87471c7f6d9cc7e9f311c7da" exitCode=1 Apr 16 18:33:01.164719 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:01.164605 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" event={"ID":"37a97773-82b0-4bd2-bc25-bc3330347365","Type":"ContainerDied","Data":"14399148122c1c441d7b425d6cd34c3a894ffadc87471c7f6d9cc7e9f311c7da"} Apr 16 18:33:01.164868 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:01.164853 2569 scope.go:117] "RemoveContainer" containerID="14399148122c1c441d7b425d6cd34c3a894ffadc87471c7f6d9cc7e9f311c7da" Apr 16 18:33:02.168838 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:02.168801 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-778f479cf5-qffzb" event={"ID":"d43a0fd9-7414-4a3a-b051-b1c0acbb4c00","Type":"ContainerStarted","Data":"37e84c03771c80dff3b11bed845794578fc9712aa64360d975f93054cf11d43b"} Apr 16 18:33:02.170514 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:02.170490 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-j6xwp" event={"ID":"98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96","Type":"ContainerStarted","Data":"7d9ba3fa59d69bd31c74f68eccb17b2a16714452fd96bc88e6b0014e1bc16ad8"} Apr 16 18:33:02.171874 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:02.171850 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" event={"ID":"37a97773-82b0-4bd2-bc25-bc3330347365","Type":"ContainerStarted","Data":"b420f5f08c7e782b02bf7f0142f24a723dcf8dab17ae1b4ee786071b1d536613"} Apr 16 18:33:02.172121 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:02.172099 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" Apr 16 18:33:02.172641 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:02.172627 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-747745957c-f74wb" Apr 16 18:33:02.213791 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:02.213744 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-j6xwp" podStartSLOduration=1.376829283 podStartE2EDuration="3.213727369s" podCreationTimestamp="2026-04-16 18:32:59 +0000 UTC" firstStartedPulling="2026-04-16 18:32:59.801843532 +0000 UTC m=+163.779926959" lastFinishedPulling="2026-04-16 18:33:01.638741622 +0000 UTC m=+165.616825045" observedRunningTime="2026-04-16 18:33:02.213024848 +0000 UTC m=+166.191108290" watchObservedRunningTime="2026-04-16 
18:33:02.213727369 +0000 UTC m=+166.191810858" Apr 16 18:33:03.644055 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:03.644001 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:33:03.644496 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:03.644001 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wq9r5" Apr 16 18:33:03.646621 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:03.646602 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-tdfb9\"" Apr 16 18:33:03.654972 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:03.654953 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wq9r5" Apr 16 18:33:03.766470 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:03.766431 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wq9r5"] Apr 16 18:33:03.769054 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:33:03.769029 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4561dc6f_93f8_48ae_a46a_8ae75f78fdb1.slice/crio-bc95a5dfbc1dfe0859b388f551219e4cc7c55929be1b70e351521bb6ec320586 WatchSource:0}: Error finding container bc95a5dfbc1dfe0859b388f551219e4cc7c55929be1b70e351521bb6ec320586: Status 404 returned error can't find the container with id bc95a5dfbc1dfe0859b388f551219e4cc7c55929be1b70e351521bb6ec320586 Apr 16 18:33:04.178706 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:04.178670 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wq9r5" event={"ID":"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1","Type":"ContainerStarted","Data":"bc95a5dfbc1dfe0859b388f551219e4cc7c55929be1b70e351521bb6ec320586"} Apr 16 18:33:05.182831 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:05.182765 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wq9r5" event={"ID":"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1","Type":"ContainerStarted","Data":"8f09fa08cd4c44647e72a09b76d7199e6b5ec85d0784ce5cf53068269fa29c8b"} Apr 16 18:33:05.182831 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:05.182800 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wq9r5" event={"ID":"4561dc6f-93f8-48ae-a46a-8ae75f78fdb1","Type":"ContainerStarted","Data":"20ee557e3fe882754acd81a7b63758b4998ef353889d69a07de70978960c03bc"} Apr 16 18:33:05.183233 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:05.182878 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-wq9r5" Apr 16 18:33:05.197944 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:05.197899 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wq9r5" podStartSLOduration=135.031927835 podStartE2EDuration="2m16.197887191s" podCreationTimestamp="2026-04-16 18:30:49 +0000 UTC" firstStartedPulling="2026-04-16 18:33:03.77087191 +0000 UTC m=+167.748955337" lastFinishedPulling="2026-04-16 18:33:04.936831266 +0000 UTC m=+168.914914693" observedRunningTime="2026-04-16 18:33:05.197025083 +0000 UTC m=+169.175108556" watchObservedRunningTime="2026-04-16 18:33:05.197887191 +0000 UTC m=+169.175970636" Apr 16 18:33:05.643598 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:05.643517 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z87hx" Apr 16 18:33:05.645803 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:05.645773 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-6c796\"" Apr 16 18:33:05.654158 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:05.654140 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z87hx" Apr 16 18:33:05.770866 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:05.770836 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z87hx"] Apr 16 18:33:05.773568 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:33:05.773541 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8db23076_0658_4e7c_aab7_30f06e2174dc.slice/crio-3f34b3ec94054990a0e1ade6e316d4d00df00633c0f9bf6c988581f9983e62f0 WatchSource:0}: Error finding container 3f34b3ec94054990a0e1ade6e316d4d00df00633c0f9bf6c988581f9983e62f0: Status 404 returned error can't find the container with id 3f34b3ec94054990a0e1ade6e316d4d00df00633c0f9bf6c988581f9983e62f0 Apr 16 18:33:06.186054 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:06.186013 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z87hx" event={"ID":"8db23076-0658-4e7c-aab7-30f06e2174dc","Type":"ContainerStarted","Data":"3f34b3ec94054990a0e1ade6e316d4d00df00633c0f9bf6c988581f9983e62f0"} Apr 16 18:33:08.192096 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:08.192060 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z87hx" event={"ID":"8db23076-0658-4e7c-aab7-30f06e2174dc","Type":"ContainerStarted","Data":"686c790dc7f638d6aed1efc9ad9edea2fc3e7f0b21f50ec4ff1ff95ad77f9ee1"} Apr 16 18:33:08.205904 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:08.205848 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-z87hx" podStartSLOduration=137.699673647 podStartE2EDuration="2m19.205833524s" podCreationTimestamp="2026-04-16 18:30:49 +0000 UTC" firstStartedPulling="2026-04-16 18:33:05.775454997 +0000 UTC m=+169.753538420" lastFinishedPulling="2026-04-16 18:33:07.281614867 +0000 UTC m=+171.259698297" observedRunningTime="2026-04-16 18:33:08.205555938 +0000 UTC m=+172.183639384" watchObservedRunningTime="2026-04-16 18:33:08.205833524 +0000 UTC m=+172.183916965" Apr 16 18:33:14.376289 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.376253 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-psccd"] Apr 16 18:33:14.379708 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.379687 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.382322 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.382295 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 16 18:33:14.382476 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.382304 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 16 18:33:14.382606 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.382483 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 16 18:33:14.382782 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.382747 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 16 18:33:14.382870 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.382784 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 16 18:33:14.382870 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.382794 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-x6np6\"" Apr 16 18:33:14.382870 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.382787 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 16 18:33:14.397409 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.397387 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8fd22549-8a71-4c5b-89f5-241942077e63-sys\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.397515 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.397427 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8fd22549-8a71-4c5b-89f5-241942077e63-metrics-client-ca\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.397515 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.397453 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-tls\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.397515 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.397484 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-accelerators-collector-config\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.397681 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.397612 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.397681 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.397670 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-wtmp\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.397779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.397737 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8fd22549-8a71-4c5b-89f5-241942077e63-root\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.397779 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.397766 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-textfile\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.397881 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.397830 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7nn2\" (UniqueName: \"kubernetes.io/projected/8fd22549-8a71-4c5b-89f5-241942077e63-kube-api-access-c7nn2\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.498934 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.498893 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8fd22549-8a71-4c5b-89f5-241942077e63-metrics-client-ca\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.498934 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.498933 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-tls\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.499173 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.498960 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-accelerators-collector-config\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.499173 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.499011 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-psccd\" (UID: 
\"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.499173 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.499041 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-wtmp\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.499173 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:33:14.499046 2569 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Apr 16 18:33:14.499173 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.499068 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8fd22549-8a71-4c5b-89f5-241942077e63-root\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.499173 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:33:14.499118 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-tls podName:8fd22549-8a71-4c5b-89f5-241942077e63 nodeName:}" failed. No retries permitted until 2026-04-16 18:33:14.999097771 +0000 UTC m=+178.977181193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-tls") pod "node-exporter-psccd" (UID: "8fd22549-8a71-4c5b-89f5-241942077e63") : secret "node-exporter-tls" not found Apr 16 18:33:14.499173 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.499125 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8fd22549-8a71-4c5b-89f5-241942077e63-root\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.499547 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.499199 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-wtmp\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.499547 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.499244 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-textfile\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.499547 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.499295 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c7nn2\" (UniqueName: \"kubernetes.io/projected/8fd22549-8a71-4c5b-89f5-241942077e63-kube-api-access-c7nn2\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.499547 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.499376 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/8fd22549-8a71-4c5b-89f5-241942077e63-sys\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.499547 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.499436 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8fd22549-8a71-4c5b-89f5-241942077e63-sys\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.499767 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.499567 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-textfile\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.499767 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.499603 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8fd22549-8a71-4c5b-89f5-241942077e63-metrics-client-ca\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.499767 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.499670 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-accelerators-collector-config\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.501311 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.501294 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:14.514994 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:14.514965 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7nn2\" (UniqueName: \"kubernetes.io/projected/8fd22549-8a71-4c5b-89f5-241942077e63-kube-api-access-c7nn2\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:15.002508 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:15.002462 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-tls\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:15.004712 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:15.004680 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8fd22549-8a71-4c5b-89f5-241942077e63-node-exporter-tls\") pod \"node-exporter-psccd\" (UID: \"8fd22549-8a71-4c5b-89f5-241942077e63\") " pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:15.188646 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:15.188615 2569 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wq9r5" Apr 16 18:33:15.292229 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:15.292154 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-psccd" Apr 16 18:33:15.300173 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:33:15.300142 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fd22549_8a71_4c5b_89f5_241942077e63.slice/crio-3164618d98bcfaaa02edfd884835c2c16e89b6bd36c7c2a2ab600b8de53c8b5a WatchSource:0}: Error finding container 3164618d98bcfaaa02edfd884835c2c16e89b6bd36c7c2a2ab600b8de53c8b5a: Status 404 returned error can't find the container with id 3164618d98bcfaaa02edfd884835c2c16e89b6bd36c7c2a2ab600b8de53c8b5a Apr 16 18:33:16.211966 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:16.211932 2569 generic.go:358] "Generic (PLEG): container finished" podID="8fd22549-8a71-4c5b-89f5-241942077e63" containerID="1ea8df8476a7e1eac57704150688a599c7881bdb27ce3377184157986f90db51" exitCode=0 Apr 16 18:33:16.212317 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:16.212008 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-psccd" event={"ID":"8fd22549-8a71-4c5b-89f5-241942077e63","Type":"ContainerDied","Data":"1ea8df8476a7e1eac57704150688a599c7881bdb27ce3377184157986f90db51"} Apr 16 18:33:16.212317 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:16.212042 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-psccd" event={"ID":"8fd22549-8a71-4c5b-89f5-241942077e63","Type":"ContainerStarted","Data":"3164618d98bcfaaa02edfd884835c2c16e89b6bd36c7c2a2ab600b8de53c8b5a"} Apr 16 18:33:17.216793 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:17.216759 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-psccd" event={"ID":"8fd22549-8a71-4c5b-89f5-241942077e63","Type":"ContainerStarted","Data":"7c17a2f90f85d333e1a0ab01e231f9fddb52128b353f79261014d5ad1b9171be"} Apr 16 18:33:17.216793 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:17.216800 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-psccd" event={"ID":"8fd22549-8a71-4c5b-89f5-241942077e63","Type":"ContainerStarted","Data":"ca373fa24af333ca98fd52337097bfbc3daf3c8a6b0b05160ac4276eb8d9393c"} Apr 16 18:33:17.235753 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:17.235710 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-psccd" podStartSLOduration=2.570072003 podStartE2EDuration="3.235696572s" podCreationTimestamp="2026-04-16 18:33:14 +0000 UTC" firstStartedPulling="2026-04-16 18:33:15.302062568 +0000 UTC m=+179.280145991" lastFinishedPulling="2026-04-16 18:33:15.96768713 +0000 UTC m=+179.945770560" observedRunningTime="2026-04-16 18:33:17.234208278 +0000 UTC m=+181.212291704" watchObservedRunningTime="2026-04-16 18:33:17.235696572 +0000 UTC m=+181.213780017" Apr 16 18:33:17.650072 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:17.649987 2569 patch_prober.go:28] interesting pod/image-registry-68699947db-2vcnw container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 16 
18:33:17.650072 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:17.650036 2569 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-68699947db-2vcnw" podUID="e1716530-3a79-4ef5-bd3c-0909772664d6" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 18:33:19.156317 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:19.156280 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:33:21.738344 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:21.738298 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-68699947db-2vcnw"] Apr 16 18:33:46.756810 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:46.756732 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-image-registry/image-registry-68699947db-2vcnw" podUID="e1716530-3a79-4ef5-bd3c-0909772664d6" containerName="registry" containerID="cri-o://7042b5b61cac410973b5d358f75c6e790140638bc71cf65801d3443ae9112c34" gracePeriod=30 Apr 16 18:33:46.995503 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:46.995477 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:33:47.036865 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.036791 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1716530-3a79-4ef5-bd3c-0909772664d6-installation-pull-secrets\") pod \"e1716530-3a79-4ef5-bd3c-0909772664d6\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " Apr 16 18:33:47.036865 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.036826 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-bound-sa-token\") pod \"e1716530-3a79-4ef5-bd3c-0909772664d6\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " Apr 16 18:33:47.036865 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.036866 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1716530-3a79-4ef5-bd3c-0909772664d6-ca-trust-extracted\") pod \"e1716530-3a79-4ef5-bd3c-0909772664d6\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " Apr 16 18:33:47.037126 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.036883 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-certificates\") pod \"e1716530-3a79-4ef5-bd3c-0909772664d6\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " Apr 16 18:33:47.037126 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.036902 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls\") pod \"e1716530-3a79-4ef5-bd3c-0909772664d6\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " Apr 16 18:33:47.037126 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.036931 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdkdk\" (UniqueName: 
\"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-kube-api-access-zdkdk\") pod \"e1716530-3a79-4ef5-bd3c-0909772664d6\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " Apr 16 18:33:47.037126 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.036961 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/e1716530-3a79-4ef5-bd3c-0909772664d6-image-registry-private-configuration\") pod \"e1716530-3a79-4ef5-bd3c-0909772664d6\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " Apr 16 18:33:47.037126 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.036995 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1716530-3a79-4ef5-bd3c-0909772664d6-trusted-ca\") pod \"e1716530-3a79-4ef5-bd3c-0909772664d6\" (UID: \"e1716530-3a79-4ef5-bd3c-0909772664d6\") " Apr 16 18:33:47.037411 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.037389 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e1716530-3a79-4ef5-bd3c-0909772664d6" (UID: "e1716530-3a79-4ef5-bd3c-0909772664d6"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:33:47.037569 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.037531 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1716530-3a79-4ef5-bd3c-0909772664d6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e1716530-3a79-4ef5-bd3c-0909772664d6" (UID: "e1716530-3a79-4ef5-bd3c-0909772664d6"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:33:47.039926 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.039873 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-kube-api-access-zdkdk" (OuterVolumeSpecName: "kube-api-access-zdkdk") pod "e1716530-3a79-4ef5-bd3c-0909772664d6" (UID: "e1716530-3a79-4ef5-bd3c-0909772664d6"). InnerVolumeSpecName "kube-api-access-zdkdk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 18:33:47.039926 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.039899 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1716530-3a79-4ef5-bd3c-0909772664d6-image-registry-private-configuration" (OuterVolumeSpecName: "image-registry-private-configuration") pod "e1716530-3a79-4ef5-bd3c-0909772664d6" (UID: "e1716530-3a79-4ef5-bd3c-0909772664d6"). InnerVolumeSpecName "image-registry-private-configuration". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:33:47.040147 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.039966 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1716530-3a79-4ef5-bd3c-0909772664d6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e1716530-3a79-4ef5-bd3c-0909772664d6" (UID: "e1716530-3a79-4ef5-bd3c-0909772664d6"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:33:47.040147 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.039968 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e1716530-3a79-4ef5-bd3c-0909772664d6" (UID: "e1716530-3a79-4ef5-bd3c-0909772664d6"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 18:33:47.040147 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.040019 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e1716530-3a79-4ef5-bd3c-0909772664d6" (UID: "e1716530-3a79-4ef5-bd3c-0909772664d6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 18:33:47.046345 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.046316 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1716530-3a79-4ef5-bd3c-0909772664d6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e1716530-3a79-4ef5-bd3c-0909772664d6" (UID: "e1716530-3a79-4ef5-bd3c-0909772664d6"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:33:47.138455 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.138401 2569 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1716530-3a79-4ef5-bd3c-0909772664d6-installation-pull-secrets\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 18:33:47.138455 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.138447 2569 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-bound-sa-token\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 18:33:47.138455 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.138461 2569 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1716530-3a79-4ef5-bd3c-0909772664d6-ca-trust-extracted\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 18:33:47.138688 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.138475 2569 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-certificates\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 18:33:47.138688 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.138488 2569 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-registry-tls\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 18:33:47.138688 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.138499 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zdkdk\" (UniqueName: \"kubernetes.io/projected/e1716530-3a79-4ef5-bd3c-0909772664d6-kube-api-access-zdkdk\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 18:33:47.138688 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.138513 2569 reconciler_common.go:299] "Volume detached for volume \"image-registry-private-configuration\" (UniqueName: 
\"kubernetes.io/secret/e1716530-3a79-4ef5-bd3c-0909772664d6-image-registry-private-configuration\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 18:33:47.138688 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.138526 2569 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1716530-3a79-4ef5-bd3c-0909772664d6-trusted-ca\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 18:33:47.294451 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.294364 2569 generic.go:358] "Generic (PLEG): container finished" podID="e1716530-3a79-4ef5-bd3c-0909772664d6" containerID="7042b5b61cac410973b5d358f75c6e790140638bc71cf65801d3443ae9112c34" exitCode=0 Apr 16 18:33:47.294451 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.294428 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-68699947db-2vcnw" event={"ID":"e1716530-3a79-4ef5-bd3c-0909772664d6","Type":"ContainerDied","Data":"7042b5b61cac410973b5d358f75c6e790140638bc71cf65801d3443ae9112c34"} Apr 16 18:33:47.294451 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.294454 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-68699947db-2vcnw" event={"ID":"e1716530-3a79-4ef5-bd3c-0909772664d6","Type":"ContainerDied","Data":"39a68afd02b0296d9c173d7b5d36754a53ed44e3914282c234aed2c7611bf713"} Apr 16 18:33:47.294722 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.294468 2569 scope.go:117] "RemoveContainer" containerID="7042b5b61cac410973b5d358f75c6e790140638bc71cf65801d3443ae9112c34" Apr 16 18:33:47.294722 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.294486 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-68699947db-2vcnw" Apr 16 18:33:47.302652 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.302623 2569 scope.go:117] "RemoveContainer" containerID="7042b5b61cac410973b5d358f75c6e790140638bc71cf65801d3443ae9112c34" Apr 16 18:33:47.302902 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:33:47.302879 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7042b5b61cac410973b5d358f75c6e790140638bc71cf65801d3443ae9112c34\": container with ID starting with 7042b5b61cac410973b5d358f75c6e790140638bc71cf65801d3443ae9112c34 not found: ID does not exist" containerID="7042b5b61cac410973b5d358f75c6e790140638bc71cf65801d3443ae9112c34" Apr 16 18:33:47.302960 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.302911 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7042b5b61cac410973b5d358f75c6e790140638bc71cf65801d3443ae9112c34"} err="failed to get container status \"7042b5b61cac410973b5d358f75c6e790140638bc71cf65801d3443ae9112c34\": rpc error: code = NotFound desc = could not find container \"7042b5b61cac410973b5d358f75c6e790140638bc71cf65801d3443ae9112c34\": container with ID starting with 7042b5b61cac410973b5d358f75c6e790140638bc71cf65801d3443ae9112c34 not found: ID does not exist" Apr 16 18:33:47.313112 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.313090 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-68699947db-2vcnw"] Apr 16 18:33:47.316603 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:47.316587 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-68699947db-2vcnw"] Apr 16 
18:33:48.647775 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:48.647734 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1716530-3a79-4ef5-bd3c-0909772664d6" path="/var/lib/kubelet/pods/e1716530-3a79-4ef5-bd3c-0909772664d6/volumes" Apr 16 18:33:49.770109 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:49.770075 2569 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" podUID="e8c054fa-1f67-4b74-9059-ed94490a803e" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 16 18:33:59.769803 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:33:59.769764 2569 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" podUID="e8c054fa-1f67-4b74-9059-ed94490a803e" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 16 18:34:09.769974 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:09.769934 2569 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" podUID="e8c054fa-1f67-4b74-9059-ed94490a803e" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 16 18:34:09.770379 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:09.770015 2569 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" Apr 16 18:34:09.770511 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:09.770494 2569 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="service-proxy" containerStatusID={"Type":"cri-o","ID":"0844209e68ea604a570dc0be4d58f0ce3acef272b08cd6c8323a07e6add0a9c9"} pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" containerMessage="Container service-proxy failed liveness probe, will be restarted" Apr 16 18:34:09.770549 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:09.770532 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" podUID="e8c054fa-1f67-4b74-9059-ed94490a803e" containerName="service-proxy" containerID="cri-o://0844209e68ea604a570dc0be4d58f0ce3acef272b08cd6c8323a07e6add0a9c9" gracePeriod=30 Apr 16 18:34:10.354279 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:10.354232 2569 generic.go:358] "Generic (PLEG): container finished" podID="e8c054fa-1f67-4b74-9059-ed94490a803e" containerID="0844209e68ea604a570dc0be4d58f0ce3acef272b08cd6c8323a07e6add0a9c9" exitCode=2 Apr 16 18:34:10.354476 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:10.354292 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" event={"ID":"e8c054fa-1f67-4b74-9059-ed94490a803e","Type":"ContainerDied","Data":"0844209e68ea604a570dc0be4d58f0ce3acef272b08cd6c8323a07e6add0a9c9"} Apr 16 18:34:10.354476 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:10.354321 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-58b579794f-dprjc" event={"ID":"e8c054fa-1f67-4b74-9059-ed94490a803e","Type":"ContainerStarted","Data":"d3f7e81245990b57a9c98a7c7a070c4120a9ce1522a54648326f44d1bdb1f301"} Apr 16 18:34:11.408726 ip-10-0-132-14 kubenswrapper[2569]: 
I0416 18:34:11.408698 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-psccd_8fd22549-8a71-4c5b-89f5-241942077e63/init-textfile/0.log" Apr 16 18:34:11.595323 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:11.595292 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-psccd_8fd22549-8a71-4c5b-89f5-241942077e63/node-exporter/0.log" Apr 16 18:34:11.795351 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:11.795311 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-psccd_8fd22549-8a71-4c5b-89f5-241942077e63/kube-rbac-proxy/0.log" Apr 16 18:34:15.794930 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:15.794900 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-console_networking-console-plugin-5cb6cf4cb4-p8jnc_9d04db45-c40a-4deb-a86e-03e77a3b560e/networking-console-plugin/0.log" Apr 16 18:34:17.595002 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:17.594975 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-z87hx_8db23076-0658-4e7c-aab7-30f06e2174dc/serve-healthcheck-canary/0.log" Apr 16 18:34:28.443502 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:28.443461 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs\") pod \"network-metrics-daemon-kk4tm\" (UID: \"dc3c5cbb-7bc5-4228-88bf-021a899d1e57\") " pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:34:28.445722 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:28.445700 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5cbb-7bc5-4228-88bf-021a899d1e57-metrics-certs\") pod \"network-metrics-daemon-kk4tm\" (UID: \"dc3c5cbb-7bc5-4228-88bf-021a899d1e57\") " pod="openshift-multus/network-metrics-daemon-kk4tm" Apr 16 18:34:28.546286 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:28.546253 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-kxnrw\"" Apr 16 18:34:28.554654 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:28.554635 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kk4tm"
Apr 16 18:34:28.669343 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:28.669299 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kk4tm"]
Apr 16 18:34:28.674698 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:34:28.674571 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc3c5cbb_7bc5_4228_88bf_021a899d1e57.slice/crio-af00f96b112447c4d82106b9d36e04511b439e895de48060a349a04c3503a30e WatchSource:0}: Error finding container af00f96b112447c4d82106b9d36e04511b439e895de48060a349a04c3503a30e: Status 404 returned error can't find the container with id af00f96b112447c4d82106b9d36e04511b439e895de48060a349a04c3503a30e
Apr 16 18:34:29.402274 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:29.402225 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kk4tm" event={"ID":"dc3c5cbb-7bc5-4228-88bf-021a899d1e57","Type":"ContainerStarted","Data":"af00f96b112447c4d82106b9d36e04511b439e895de48060a349a04c3503a30e"}
Apr 16 18:34:30.405828 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:30.405783 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kk4tm" event={"ID":"dc3c5cbb-7bc5-4228-88bf-021a899d1e57","Type":"ContainerStarted","Data":"7bebf045218163587f9f452630860ec03a9b5506400a9082d5f65542cc0608d1"}
Apr 16 18:34:30.405828 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:30.405824 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kk4tm" event={"ID":"dc3c5cbb-7bc5-4228-88bf-021a899d1e57","Type":"ContainerStarted","Data":"04498d1aee08cbb50fab9414ee6cc82d4970376f83c9931d8bdeac90a2d68a5b"}
Apr 16 18:34:30.420427 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:34:30.420378 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kk4tm" podStartSLOduration=253.565867368 podStartE2EDuration="4m14.420362814s" podCreationTimestamp="2026-04-16 18:30:16 +0000 UTC" firstStartedPulling="2026-04-16 18:34:28.676955479 +0000 UTC m=+252.655038903" lastFinishedPulling="2026-04-16 18:34:29.531450923 +0000 UTC m=+253.509534349" observedRunningTime="2026-04-16 18:34:30.419693076 +0000 UTC m=+254.397776520" watchObservedRunningTime="2026-04-16 18:34:30.420362814 +0000 UTC m=+254.398446258"
Apr 16 18:35:16.518194 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:35:16.518161 2569 kubelet.go:1628] "Image garbage collection succeeded"
Apr 16 18:38:34.249562 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.249530 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/seaweedfs-86cc847c5c-ss4br"]
Apr 16 18:38:34.250038 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.249827 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e1716530-3a79-4ef5-bd3c-0909772664d6" containerName="registry"
Apr 16 18:38:34.250038 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.249862 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1716530-3a79-4ef5-bd3c-0909772664d6" containerName="registry"
Apr 16 18:38:34.250038 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.249911 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="e1716530-3a79-4ef5-bd3c-0909772664d6" containerName="registry"
Apr 16 18:38:34.252706 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.252690 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-ss4br"
Apr 16 18:38:34.254530 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.254506 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"mlpipeline-s3-artifact\""
Apr 16 18:38:34.254664 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.254509 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"default-dockercfg-rw2ll\""
Apr 16 18:38:34.254664 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.254575 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\""
Apr 16 18:38:34.254973 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.254960 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\""
Apr 16 18:38:34.260008 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.259987 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-ss4br"]
Apr 16 18:38:34.260173 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.260037 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwhhd\" (UniqueName: \"kubernetes.io/projected/d386611a-e077-450b-b47b-13f74a58b0b6-kube-api-access-vwhhd\") pod \"seaweedfs-86cc847c5c-ss4br\" (UID: \"d386611a-e077-450b-b47b-13f74a58b0b6\") " pod="kserve/seaweedfs-86cc847c5c-ss4br"
Apr 16 18:38:34.260276 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.260242 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d386611a-e077-450b-b47b-13f74a58b0b6-data\") pod \"seaweedfs-86cc847c5c-ss4br\" (UID: \"d386611a-e077-450b-b47b-13f74a58b0b6\") " pod="kserve/seaweedfs-86cc847c5c-ss4br"
Apr 16 18:38:34.360553 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.360520 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vwhhd\" (UniqueName: \"kubernetes.io/projected/d386611a-e077-450b-b47b-13f74a58b0b6-kube-api-access-vwhhd\") pod \"seaweedfs-86cc847c5c-ss4br\" (UID: \"d386611a-e077-450b-b47b-13f74a58b0b6\") " pod="kserve/seaweedfs-86cc847c5c-ss4br"
Apr 16 18:38:34.360717 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.360575 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d386611a-e077-450b-b47b-13f74a58b0b6-data\") pod \"seaweedfs-86cc847c5c-ss4br\" (UID: \"d386611a-e077-450b-b47b-13f74a58b0b6\") " pod="kserve/seaweedfs-86cc847c5c-ss4br"
Apr 16 18:38:34.360874 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.360860 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d386611a-e077-450b-b47b-13f74a58b0b6-data\") pod \"seaweedfs-86cc847c5c-ss4br\" (UID: \"d386611a-e077-450b-b47b-13f74a58b0b6\") " pod="kserve/seaweedfs-86cc847c5c-ss4br"
Apr 16 18:38:34.367748 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.367727 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwhhd\" (UniqueName: \"kubernetes.io/projected/d386611a-e077-450b-b47b-13f74a58b0b6-kube-api-access-vwhhd\") pod \"seaweedfs-86cc847c5c-ss4br\" (UID: \"d386611a-e077-450b-b47b-13f74a58b0b6\") " pod="kserve/seaweedfs-86cc847c5c-ss4br"
Apr 16 18:38:34.563006 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.562921 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-ss4br"
Apr 16 18:38:34.675144 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.675116 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-ss4br"]
Apr 16 18:38:34.677971 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:38:34.677942 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd386611a_e077_450b_b47b_13f74a58b0b6.slice/crio-b3f2866747f564389b937b162f525cf63f614aac9dfa1011ef0e588218cad9c1 WatchSource:0}: Error finding container b3f2866747f564389b937b162f525cf63f614aac9dfa1011ef0e588218cad9c1: Status 404 returned error can't find the container with id b3f2866747f564389b937b162f525cf63f614aac9dfa1011ef0e588218cad9c1
Apr 16 18:38:34.679279 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:34.679259 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 16 18:38:35.012509 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:35.012468 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-ss4br" event={"ID":"d386611a-e077-450b-b47b-13f74a58b0b6","Type":"ContainerStarted","Data":"b3f2866747f564389b937b162f525cf63f614aac9dfa1011ef0e588218cad9c1"}
Apr 16 18:38:38.024987 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:38.024946 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-ss4br" event={"ID":"d386611a-e077-450b-b47b-13f74a58b0b6","Type":"ContainerStarted","Data":"ffbf91e0f68bd6aff1f26359fb807a0a486c7590ba031bf39c95c8343ff0c711"}
Apr 16 18:38:38.025454 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:38.025060 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/seaweedfs-86cc847c5c-ss4br"
Apr 16 18:38:38.039395 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:38.039326 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/seaweedfs-86cc847c5c-ss4br" podStartSLOduration=1.501476209 podStartE2EDuration="4.039312236s" podCreationTimestamp="2026-04-16 18:38:34 +0000 UTC" firstStartedPulling="2026-04-16 18:38:34.679438164 +0000 UTC m=+498.657521587" lastFinishedPulling="2026-04-16 18:38:37.217274188 +0000 UTC m=+501.195357614" observedRunningTime="2026-04-16 18:38:38.037829045 +0000 UTC m=+502.015912489" watchObservedRunningTime="2026-04-16 18:38:38.039312236 +0000 UTC m=+502.017395681"
Apr 16 18:38:44.029771 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:38:44.029741 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/seaweedfs-86cc847c5c-ss4br"
Apr 16 18:39:44.413370 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.413322 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/model-serving-api-86f7b4b499-vtkth"]
Apr 16 18:39:44.415383 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.415368 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-vtkth"
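
The pod_startup_latency_tracker entries above encode a relationship that can be checked from the log itself: podStartSLOduration appears to be podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling), computed on the kubelet's monotonic clock (the m=+... offsets), consistent with the pod-startup SLI excluding image pulls. A minimal Go check for the kserve/seaweedfs-86cc847c5c-ss4br entry, using only values copied from the log:

    package main

    import "fmt"

    func main() {
        // Monotonic m=+ offsets (seconds) copied from the latency-tracker line.
        firstStartedPulling := 498.657521587
        lastFinishedPulling := 501.195357614
        e2e := 4.039312236 // podStartE2EDuration: watchObservedRunningTime - podCreationTimestamp

        pull := lastFinishedPulling - firstStartedPulling // image-pull window
        fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, e2e-pull)
        // pull=2.537836027s slo=1.501476209s, matching podStartSLOduration above.
    }
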
Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-vtkth" Apr 16 18:39:44.417793 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.417776 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-tls\"" Apr 16 18:39:44.417793 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.417778 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-dockercfg-gktvf\"" Apr 16 18:39:44.425731 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.425709 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-vtkth"] Apr 16 18:39:44.429352 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.429311 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/odh-model-controller-696fc77849-jb782"] Apr 16 18:39:44.431278 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.431260 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/odh-model-controller-696fc77849-jb782" Apr 16 18:39:44.433057 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.433041 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-webhook-cert\"" Apr 16 18:39:44.433057 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.433054 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-dockercfg-cj7q6\"" Apr 16 18:39:44.441939 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.441908 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-jb782"] Apr 16 18:39:44.566103 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.566066 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/97943f8c-7dba-46b1-ad54-1b60669b36e6-tls-certs\") pod \"model-serving-api-86f7b4b499-vtkth\" (UID: \"97943f8c-7dba-46b1-ad54-1b60669b36e6\") " pod="kserve/model-serving-api-86f7b4b499-vtkth" Apr 16 18:39:44.566289 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.566120 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f05ea0c-23c2-4221-a337-1a09fd938881-cert\") pod \"odh-model-controller-696fc77849-jb782\" (UID: \"8f05ea0c-23c2-4221-a337-1a09fd938881\") " pod="kserve/odh-model-controller-696fc77849-jb782" Apr 16 18:39:44.566289 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.566148 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc58p\" (UniqueName: \"kubernetes.io/projected/8f05ea0c-23c2-4221-a337-1a09fd938881-kube-api-access-lc58p\") pod \"odh-model-controller-696fc77849-jb782\" (UID: \"8f05ea0c-23c2-4221-a337-1a09fd938881\") " pod="kserve/odh-model-controller-696fc77849-jb782" Apr 16 18:39:44.566289 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.566176 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt2kk\" (UniqueName: \"kubernetes.io/projected/97943f8c-7dba-46b1-ad54-1b60669b36e6-kube-api-access-lt2kk\") pod \"model-serving-api-86f7b4b499-vtkth\" (UID: \"97943f8c-7dba-46b1-ad54-1b60669b36e6\") " pod="kserve/model-serving-api-86f7b4b499-vtkth" Apr 16 18:39:44.667436 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.667360 2569 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/97943f8c-7dba-46b1-ad54-1b60669b36e6-tls-certs\") pod \"model-serving-api-86f7b4b499-vtkth\" (UID: \"97943f8c-7dba-46b1-ad54-1b60669b36e6\") " pod="kserve/model-serving-api-86f7b4b499-vtkth" Apr 16 18:39:44.667436 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.667418 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f05ea0c-23c2-4221-a337-1a09fd938881-cert\") pod \"odh-model-controller-696fc77849-jb782\" (UID: \"8f05ea0c-23c2-4221-a337-1a09fd938881\") " pod="kserve/odh-model-controller-696fc77849-jb782" Apr 16 18:39:44.667610 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.667451 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lc58p\" (UniqueName: \"kubernetes.io/projected/8f05ea0c-23c2-4221-a337-1a09fd938881-kube-api-access-lc58p\") pod \"odh-model-controller-696fc77849-jb782\" (UID: \"8f05ea0c-23c2-4221-a337-1a09fd938881\") " pod="kserve/odh-model-controller-696fc77849-jb782" Apr 16 18:39:44.667610 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.667470 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lt2kk\" (UniqueName: \"kubernetes.io/projected/97943f8c-7dba-46b1-ad54-1b60669b36e6-kube-api-access-lt2kk\") pod \"model-serving-api-86f7b4b499-vtkth\" (UID: \"97943f8c-7dba-46b1-ad54-1b60669b36e6\") " pod="kserve/model-serving-api-86f7b4b499-vtkth" Apr 16 18:39:44.669812 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.669788 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/97943f8c-7dba-46b1-ad54-1b60669b36e6-tls-certs\") pod \"model-serving-api-86f7b4b499-vtkth\" (UID: \"97943f8c-7dba-46b1-ad54-1b60669b36e6\") " pod="kserve/model-serving-api-86f7b4b499-vtkth" Apr 16 18:39:44.669929 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.669796 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f05ea0c-23c2-4221-a337-1a09fd938881-cert\") pod \"odh-model-controller-696fc77849-jb782\" (UID: \"8f05ea0c-23c2-4221-a337-1a09fd938881\") " pod="kserve/odh-model-controller-696fc77849-jb782" Apr 16 18:39:44.674861 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.674835 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc58p\" (UniqueName: \"kubernetes.io/projected/8f05ea0c-23c2-4221-a337-1a09fd938881-kube-api-access-lc58p\") pod \"odh-model-controller-696fc77849-jb782\" (UID: \"8f05ea0c-23c2-4221-a337-1a09fd938881\") " pod="kserve/odh-model-controller-696fc77849-jb782" Apr 16 18:39:44.674861 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.674845 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt2kk\" (UniqueName: \"kubernetes.io/projected/97943f8c-7dba-46b1-ad54-1b60669b36e6-kube-api-access-lt2kk\") pod \"model-serving-api-86f7b4b499-vtkth\" (UID: \"97943f8c-7dba-46b1-ad54-1b60669b36e6\") " pod="kserve/model-serving-api-86f7b4b499-vtkth" Apr 16 18:39:44.724849 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.724816 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-vtkth" Apr 16 18:39:44.740540 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.740518 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/odh-model-controller-696fc77849-jb782" Apr 16 18:39:44.856152 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.856125 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-vtkth"] Apr 16 18:39:44.859791 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:39:44.859761 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97943f8c_7dba_46b1_ad54_1b60669b36e6.slice/crio-d1796d358bb5403b2da9102fdc2262bb37fc3dd9baaf6525363b00e60841094c WatchSource:0}: Error finding container d1796d358bb5403b2da9102fdc2262bb37fc3dd9baaf6525363b00e60841094c: Status 404 returned error can't find the container with id d1796d358bb5403b2da9102fdc2262bb37fc3dd9baaf6525363b00e60841094c Apr 16 18:39:44.874276 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:44.874249 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-jb782"] Apr 16 18:39:44.877229 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:39:44.877201 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f05ea0c_23c2_4221_a337_1a09fd938881.slice/crio-57f3104cfd70ba461079488f041c8425d5e186ca4b56e4d068dad73a0e19d0d7 WatchSource:0}: Error finding container 57f3104cfd70ba461079488f041c8425d5e186ca4b56e4d068dad73a0e19d0d7: Status 404 returned error can't find the container with id 57f3104cfd70ba461079488f041c8425d5e186ca4b56e4d068dad73a0e19d0d7 Apr 16 18:39:45.200164 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:45.200127 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-vtkth" event={"ID":"97943f8c-7dba-46b1-ad54-1b60669b36e6","Type":"ContainerStarted","Data":"d1796d358bb5403b2da9102fdc2262bb37fc3dd9baaf6525363b00e60841094c"} Apr 16 18:39:45.201173 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:45.201147 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-jb782" event={"ID":"8f05ea0c-23c2-4221-a337-1a09fd938881","Type":"ContainerStarted","Data":"57f3104cfd70ba461079488f041c8425d5e186ca4b56e4d068dad73a0e19d0d7"} Apr 16 18:39:49.214352 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:49.214310 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-jb782" event={"ID":"8f05ea0c-23c2-4221-a337-1a09fd938881","Type":"ContainerStarted","Data":"4d93847a2493c51dc95db4b852efdb45a39e39514e09630a7960c4f0e44827d3"} Apr 16 18:39:49.214796 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:49.214448 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/odh-model-controller-696fc77849-jb782" Apr 16 18:39:49.215694 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:49.215671 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-vtkth" event={"ID":"97943f8c-7dba-46b1-ad54-1b60669b36e6","Type":"ContainerStarted","Data":"4e9de01319b5969ff5ee99443d0c3dd0495f2aeee0c464e2ab3f640ef8926423"} Apr 16 18:39:49.215843 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:49.215828 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/model-serving-api-86f7b4b499-vtkth" Apr 16 18:39:49.230766 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:49.230725 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/odh-model-controller-696fc77849-jb782" 
Apr 16 18:39:49.243178 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:39:49.243116 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/model-serving-api-86f7b4b499-vtkth" podStartSLOduration=1.685518633 podStartE2EDuration="5.243104559s" podCreationTimestamp="2026-04-16 18:39:44 +0000 UTC" firstStartedPulling="2026-04-16 18:39:44.861533974 +0000 UTC m=+568.839617397" lastFinishedPulling="2026-04-16 18:39:48.41911989 +0000 UTC m=+572.397203323" observedRunningTime="2026-04-16 18:39:49.242915812 +0000 UTC m=+573.220999279" watchObservedRunningTime="2026-04-16 18:39:49.243104559 +0000 UTC m=+573.221188003"
Apr 16 18:40:00.220222 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:00.220188 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/odh-model-controller-696fc77849-jb782"
Apr 16 18:40:00.222212 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:00.222191 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/model-serving-api-86f7b4b499-vtkth"
Apr 16 18:40:01.120589 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:01.120558 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/s3-init-f5n5v"]
Apr 16 18:40:01.123546 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:01.123530 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/s3-init-f5n5v"
Apr 16 18:40:01.129297 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:01.129274 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/s3-init-f5n5v"]
Apr 16 18:40:01.294831 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:01.294798 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqzhd\" (UniqueName: \"kubernetes.io/projected/69095eac-fc69-48f2-a272-d16b50b3b10c-kube-api-access-xqzhd\") pod \"s3-init-f5n5v\" (UID: \"69095eac-fc69-48f2-a272-d16b50b3b10c\") " pod="kserve/s3-init-f5n5v"
Apr 16 18:40:01.396008 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:01.395923 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xqzhd\" (UniqueName: \"kubernetes.io/projected/69095eac-fc69-48f2-a272-d16b50b3b10c-kube-api-access-xqzhd\") pod \"s3-init-f5n5v\" (UID: \"69095eac-fc69-48f2-a272-d16b50b3b10c\") " pod="kserve/s3-init-f5n5v"
Apr 16 18:40:01.403893 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:01.403870 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqzhd\" (UniqueName: \"kubernetes.io/projected/69095eac-fc69-48f2-a272-d16b50b3b10c-kube-api-access-xqzhd\") pod \"s3-init-f5n5v\" (UID: \"69095eac-fc69-48f2-a272-d16b50b3b10c\") " pod="kserve/s3-init-f5n5v"
Apr 16 18:40:01.432992 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:01.432950 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/s3-init-f5n5v"
Apr 16 18:40:01.545078 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:01.545047 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/s3-init-f5n5v"]
Apr 16 18:40:01.548741 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:40:01.548706 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69095eac_fc69_48f2_a272_d16b50b3b10c.slice/crio-ab617faea9730337b5520d6eec41153d681e38b1b41a96c448dea30441e1c3f1 WatchSource:0}: Error finding container ab617faea9730337b5520d6eec41153d681e38b1b41a96c448dea30441e1c3f1: Status 404 returned error can't find the container with id ab617faea9730337b5520d6eec41153d681e38b1b41a96c448dea30441e1c3f1
Apr 16 18:40:02.251710 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:02.251632 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-f5n5v" event={"ID":"69095eac-fc69-48f2-a272-d16b50b3b10c","Type":"ContainerStarted","Data":"ab617faea9730337b5520d6eec41153d681e38b1b41a96c448dea30441e1c3f1"}
Apr 16 18:40:06.264940 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:06.264889 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-f5n5v" event={"ID":"69095eac-fc69-48f2-a272-d16b50b3b10c","Type":"ContainerStarted","Data":"12433efaed4a5ad3755fe3c687906789f3187c65d00e5878290eeb4cf6c86c4b"}
Apr 16 18:40:06.279273 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:06.279219 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/s3-init-f5n5v" podStartSLOduration=0.908893931 podStartE2EDuration="5.279201398s" podCreationTimestamp="2026-04-16 18:40:01 +0000 UTC" firstStartedPulling="2026-04-16 18:40:01.550598094 +0000 UTC m=+585.528681520" lastFinishedPulling="2026-04-16 18:40:05.920905558 +0000 UTC m=+589.898988987" observedRunningTime="2026-04-16 18:40:06.278485846 +0000 UTC m=+590.256569293" watchObservedRunningTime="2026-04-16 18:40:06.279201398 +0000 UTC m=+590.257284847"
Apr 16 18:40:09.273299 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:09.273213 2569 generic.go:358] "Generic (PLEG): container finished" podID="69095eac-fc69-48f2-a272-d16b50b3b10c" containerID="12433efaed4a5ad3755fe3c687906789f3187c65d00e5878290eeb4cf6c86c4b" exitCode=0
Apr 16 18:40:09.273299 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:09.273285 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-f5n5v" event={"ID":"69095eac-fc69-48f2-a272-d16b50b3b10c","Type":"ContainerDied","Data":"12433efaed4a5ad3755fe3c687906789f3187c65d00e5878290eeb4cf6c86c4b"}
Apr 16 18:40:10.404297 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:10.404275 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/s3-init-f5n5v"
Apr 16 18:40:10.465029 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:10.464987 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqzhd\" (UniqueName: \"kubernetes.io/projected/69095eac-fc69-48f2-a272-d16b50b3b10c-kube-api-access-xqzhd\") pod \"69095eac-fc69-48f2-a272-d16b50b3b10c\" (UID: \"69095eac-fc69-48f2-a272-d16b50b3b10c\") "
Apr 16 18:40:10.467118 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:10.467095 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69095eac-fc69-48f2-a272-d16b50b3b10c-kube-api-access-xqzhd" (OuterVolumeSpecName: "kube-api-access-xqzhd") pod "69095eac-fc69-48f2-a272-d16b50b3b10c" (UID: "69095eac-fc69-48f2-a272-d16b50b3b10c"). InnerVolumeSpecName "kube-api-access-xqzhd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 18:40:10.565767 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:10.565676 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xqzhd\" (UniqueName: \"kubernetes.io/projected/69095eac-fc69-48f2-a272-d16b50b3b10c-kube-api-access-xqzhd\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\""
Apr 16 18:40:11.280010 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:11.279974 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-f5n5v" event={"ID":"69095eac-fc69-48f2-a272-d16b50b3b10c","Type":"ContainerDied","Data":"ab617faea9730337b5520d6eec41153d681e38b1b41a96c448dea30441e1c3f1"}
Apr 16 18:40:11.280010 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:11.280007 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab617faea9730337b5520d6eec41153d681e38b1b41a96c448dea30441e1c3f1"
Apr 16 18:40:11.280214 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:11.280029 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/s3-init-f5n5v"
Apr 16 18:40:12.070076 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:12.070043 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4"]
Apr 16 18:40:12.070477 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:12.070297 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="69095eac-fc69-48f2-a272-d16b50b3b10c" containerName="s3-init"
Apr 16 18:40:12.070477 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:12.070308 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="69095eac-fc69-48f2-a272-d16b50b3b10c" containerName="s3-init"
Apr 16 18:40:12.070477 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:12.070367 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="69095eac-fc69-48f2-a272-d16b50b3b10c" containerName="s3-init"
Apr 16 18:40:12.073293 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:12.073271 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4"
Need to start a new one" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4" Apr 16 18:40:12.074431 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:12.074408 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/140b250b-9c29-4189-86ad-d71da2a3c6db-data\") pod \"seaweedfs-tls-custom-ddd4dbfd-fzlc4\" (UID: \"140b250b-9c29-4189-86ad-d71da2a3c6db\") " pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4" Apr 16 18:40:12.074531 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:12.074459 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slhxw\" (UniqueName: \"kubernetes.io/projected/140b250b-9c29-4189-86ad-d71da2a3c6db-kube-api-access-slhxw\") pod \"seaweedfs-tls-custom-ddd4dbfd-fzlc4\" (UID: \"140b250b-9c29-4189-86ad-d71da2a3c6db\") " pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4" Apr 16 18:40:12.075133 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:12.075109 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"seaweedfs-tls-custom-artifact\"" Apr 16 18:40:12.080560 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:12.080540 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4"] Apr 16 18:40:12.175093 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:12.175056 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-slhxw\" (UniqueName: \"kubernetes.io/projected/140b250b-9c29-4189-86ad-d71da2a3c6db-kube-api-access-slhxw\") pod \"seaweedfs-tls-custom-ddd4dbfd-fzlc4\" (UID: \"140b250b-9c29-4189-86ad-d71da2a3c6db\") " pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4" Apr 16 18:40:12.175093 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:12.175098 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/140b250b-9c29-4189-86ad-d71da2a3c6db-data\") pod \"seaweedfs-tls-custom-ddd4dbfd-fzlc4\" (UID: \"140b250b-9c29-4189-86ad-d71da2a3c6db\") " pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4" Apr 16 18:40:12.175426 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:12.175410 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/140b250b-9c29-4189-86ad-d71da2a3c6db-data\") pod \"seaweedfs-tls-custom-ddd4dbfd-fzlc4\" (UID: \"140b250b-9c29-4189-86ad-d71da2a3c6db\") " pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4" Apr 16 18:40:12.183999 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:12.183967 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-slhxw\" (UniqueName: \"kubernetes.io/projected/140b250b-9c29-4189-86ad-d71da2a3c6db-kube-api-access-slhxw\") pod \"seaweedfs-tls-custom-ddd4dbfd-fzlc4\" (UID: \"140b250b-9c29-4189-86ad-d71da2a3c6db\") " pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4" Apr 16 18:40:12.382793 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:12.382706 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4" Apr 16 18:40:12.495654 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:12.495624 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4"] Apr 16 18:40:12.498493 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:40:12.498467 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod140b250b_9c29_4189_86ad_d71da2a3c6db.slice/crio-3c7772cc16fd7d03939b79bbc2d1e996a8b02e7e3da542fa03ae44bf5339d41d WatchSource:0}: Error finding container 3c7772cc16fd7d03939b79bbc2d1e996a8b02e7e3da542fa03ae44bf5339d41d: Status 404 returned error can't find the container with id 3c7772cc16fd7d03939b79bbc2d1e996a8b02e7e3da542fa03ae44bf5339d41d Apr 16 18:40:13.286540 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:13.286507 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4" event={"ID":"140b250b-9c29-4189-86ad-d71da2a3c6db","Type":"ContainerStarted","Data":"a434ba7c943affa91bf058ba1d0293f9e0ce90150aff7d79401d0f6c9d83c5db"} Apr 16 18:40:13.286540 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:13.286545 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4" event={"ID":"140b250b-9c29-4189-86ad-d71da2a3c6db","Type":"ContainerStarted","Data":"3c7772cc16fd7d03939b79bbc2d1e996a8b02e7e3da542fa03ae44bf5339d41d"} Apr 16 18:40:13.301067 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:13.301009 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4" podStartSLOduration=1.005336966 podStartE2EDuration="1.300993206s" podCreationTimestamp="2026-04-16 18:40:12 +0000 UTC" firstStartedPulling="2026-04-16 18:40:12.499781494 +0000 UTC m=+596.477864918" lastFinishedPulling="2026-04-16 18:40:12.795437734 +0000 UTC m=+596.773521158" observedRunningTime="2026-04-16 18:40:13.300069391 +0000 UTC m=+597.278152837" watchObservedRunningTime="2026-04-16 18:40:13.300993206 +0000 UTC m=+597.279076652" Apr 16 18:40:14.772108 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:14.772067 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4"] Apr 16 18:40:15.291608 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:15.291571 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4" podUID="140b250b-9c29-4189-86ad-d71da2a3c6db" containerName="seaweedfs-tls-custom" containerID="cri-o://a434ba7c943affa91bf058ba1d0293f9e0ce90150aff7d79401d0f6c9d83c5db" gracePeriod=30 Apr 16 18:40:16.537634 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:16.537611 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4" Apr 16 18:40:16.605931 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:16.605857 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/140b250b-9c29-4189-86ad-d71da2a3c6db-data\") pod \"140b250b-9c29-4189-86ad-d71da2a3c6db\" (UID: \"140b250b-9c29-4189-86ad-d71da2a3c6db\") " Apr 16 18:40:16.605931 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:16.605906 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slhxw\" (UniqueName: \"kubernetes.io/projected/140b250b-9c29-4189-86ad-d71da2a3c6db-kube-api-access-slhxw\") pod \"140b250b-9c29-4189-86ad-d71da2a3c6db\" (UID: \"140b250b-9c29-4189-86ad-d71da2a3c6db\") " Apr 16 18:40:16.606966 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:16.606936 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/140b250b-9c29-4189-86ad-d71da2a3c6db-data" (OuterVolumeSpecName: "data") pod "140b250b-9c29-4189-86ad-d71da2a3c6db" (UID: "140b250b-9c29-4189-86ad-d71da2a3c6db"). InnerVolumeSpecName "data". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:40:16.607845 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:16.607822 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/140b250b-9c29-4189-86ad-d71da2a3c6db-kube-api-access-slhxw" (OuterVolumeSpecName: "kube-api-access-slhxw") pod "140b250b-9c29-4189-86ad-d71da2a3c6db" (UID: "140b250b-9c29-4189-86ad-d71da2a3c6db"). InnerVolumeSpecName "kube-api-access-slhxw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 18:40:16.706361 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:16.706321 2569 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/140b250b-9c29-4189-86ad-d71da2a3c6db-data\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 18:40:16.706361 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:16.706358 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-slhxw\" (UniqueName: \"kubernetes.io/projected/140b250b-9c29-4189-86ad-d71da2a3c6db-kube-api-access-slhxw\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 18:40:17.298015 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.297975 2569 generic.go:358] "Generic (PLEG): container finished" podID="140b250b-9c29-4189-86ad-d71da2a3c6db" containerID="a434ba7c943affa91bf058ba1d0293f9e0ce90150aff7d79401d0f6c9d83c5db" exitCode=0 Apr 16 18:40:17.298205 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.298023 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4" event={"ID":"140b250b-9c29-4189-86ad-d71da2a3c6db","Type":"ContainerDied","Data":"a434ba7c943affa91bf058ba1d0293f9e0ce90150aff7d79401d0f6c9d83c5db"} Apr 16 18:40:17.298205 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.298038 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4" Apr 16 18:40:17.298205 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.298049 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4" event={"ID":"140b250b-9c29-4189-86ad-d71da2a3c6db","Type":"ContainerDied","Data":"3c7772cc16fd7d03939b79bbc2d1e996a8b02e7e3da542fa03ae44bf5339d41d"} Apr 16 18:40:17.298205 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.298068 2569 scope.go:117] "RemoveContainer" containerID="a434ba7c943affa91bf058ba1d0293f9e0ce90150aff7d79401d0f6c9d83c5db" Apr 16 18:40:17.306576 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.306557 2569 scope.go:117] "RemoveContainer" containerID="a434ba7c943affa91bf058ba1d0293f9e0ce90150aff7d79401d0f6c9d83c5db" Apr 16 18:40:17.306808 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:40:17.306789 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a434ba7c943affa91bf058ba1d0293f9e0ce90150aff7d79401d0f6c9d83c5db\": container with ID starting with a434ba7c943affa91bf058ba1d0293f9e0ce90150aff7d79401d0f6c9d83c5db not found: ID does not exist" containerID="a434ba7c943affa91bf058ba1d0293f9e0ce90150aff7d79401d0f6c9d83c5db" Apr 16 18:40:17.306857 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.306817 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a434ba7c943affa91bf058ba1d0293f9e0ce90150aff7d79401d0f6c9d83c5db"} err="failed to get container status \"a434ba7c943affa91bf058ba1d0293f9e0ce90150aff7d79401d0f6c9d83c5db\": rpc error: code = NotFound desc = could not find container \"a434ba7c943affa91bf058ba1d0293f9e0ce90150aff7d79401d0f6c9d83c5db\": container with ID starting with a434ba7c943affa91bf058ba1d0293f9e0ce90150aff7d79401d0f6c9d83c5db not found: ID does not exist" Apr 16 18:40:17.310857 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.310836 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4"] Apr 16 18:40:17.312953 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.312933 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve/seaweedfs-tls-custom-ddd4dbfd-fzlc4"] Apr 16 18:40:17.339396 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.339375 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz"] Apr 16 18:40:17.339601 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.339591 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="140b250b-9c29-4189-86ad-d71da2a3c6db" containerName="seaweedfs-tls-custom" Apr 16 18:40:17.339641 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.339604 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="140b250b-9c29-4189-86ad-d71da2a3c6db" containerName="seaweedfs-tls-custom" Apr 16 18:40:17.339677 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.339670 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="140b250b-9c29-4189-86ad-d71da2a3c6db" containerName="seaweedfs-tls-custom" Apr 16 18:40:17.342521 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.342507 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz" Apr 16 18:40:17.344252 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.344235 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"seaweedfs-tls-custom-artifact\"" Apr 16 18:40:17.344325 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.344235 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"seaweedfs-tls-custom\"" Apr 16 18:40:17.349094 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.349072 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz"] Apr 16 18:40:17.410874 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.410843 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"seaweedfs-tls-custom\" (UniqueName: \"kubernetes.io/projected/5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370-seaweedfs-tls-custom\") pod \"seaweedfs-tls-custom-5c88b85bb7-vdvnz\" (UID: \"5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370\") " pod="kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz" Apr 16 18:40:17.410992 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.410893 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370-data\") pod \"seaweedfs-tls-custom-5c88b85bb7-vdvnz\" (UID: \"5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370\") " pod="kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz" Apr 16 18:40:17.410992 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.410960 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt8hp\" (UniqueName: \"kubernetes.io/projected/5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370-kube-api-access-mt8hp\") pod \"seaweedfs-tls-custom-5c88b85bb7-vdvnz\" (UID: \"5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370\") " pod="kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz" Apr 16 18:40:17.512098 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.512072 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"seaweedfs-tls-custom\" (UniqueName: \"kubernetes.io/projected/5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370-seaweedfs-tls-custom\") pod \"seaweedfs-tls-custom-5c88b85bb7-vdvnz\" (UID: \"5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370\") " pod="kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz" Apr 16 18:40:17.512213 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.512121 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370-data\") pod \"seaweedfs-tls-custom-5c88b85bb7-vdvnz\" (UID: \"5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370\") " pod="kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz" Apr 16 18:40:17.512213 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.512153 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mt8hp\" (UniqueName: \"kubernetes.io/projected/5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370-kube-api-access-mt8hp\") pod \"seaweedfs-tls-custom-5c88b85bb7-vdvnz\" (UID: \"5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370\") " pod="kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz" Apr 16 18:40:17.512548 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.512528 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370-data\") pod 
\"seaweedfs-tls-custom-5c88b85bb7-vdvnz\" (UID: \"5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370\") " pod="kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz" Apr 16 18:40:17.514519 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.514490 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"seaweedfs-tls-custom\" (UniqueName: \"kubernetes.io/projected/5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370-seaweedfs-tls-custom\") pod \"seaweedfs-tls-custom-5c88b85bb7-vdvnz\" (UID: \"5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370\") " pod="kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz" Apr 16 18:40:17.519397 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.519376 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt8hp\" (UniqueName: \"kubernetes.io/projected/5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370-kube-api-access-mt8hp\") pod \"seaweedfs-tls-custom-5c88b85bb7-vdvnz\" (UID: \"5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370\") " pod="kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz" Apr 16 18:40:17.651349 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.651239 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz" Apr 16 18:40:17.769104 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:17.769072 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz"] Apr 16 18:40:17.772155 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:40:17.772120 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b4dc11a_3fab_4a6b_9deb_9bf0c61f6370.slice/crio-5cff2c2ba41085ee35b77ba18b9787a4ad67eda0b558e4c0f2ba8c53f3a9e9ef WatchSource:0}: Error finding container 5cff2c2ba41085ee35b77ba18b9787a4ad67eda0b558e4c0f2ba8c53f3a9e9ef: Status 404 returned error can't find the container with id 5cff2c2ba41085ee35b77ba18b9787a4ad67eda0b558e4c0f2ba8c53f3a9e9ef Apr 16 18:40:18.302929 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:18.302892 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz" event={"ID":"5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370","Type":"ContainerStarted","Data":"2e8cd6b27556c90fc11ca96ceb316ecafa2fce25d633cf4da9dc3c505da1ed8f"} Apr 16 18:40:18.302929 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:18.302933 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz" event={"ID":"5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370","Type":"ContainerStarted","Data":"5cff2c2ba41085ee35b77ba18b9787a4ad67eda0b558e4c0f2ba8c53f3a9e9ef"} Apr 16 18:40:18.316921 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:18.316880 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/seaweedfs-tls-custom-5c88b85bb7-vdvnz" podStartSLOduration=0.964479695 podStartE2EDuration="1.316867134s" podCreationTimestamp="2026-04-16 18:40:17 +0000 UTC" firstStartedPulling="2026-04-16 18:40:17.77346894 +0000 UTC m=+601.751552363" lastFinishedPulling="2026-04-16 18:40:18.125856376 +0000 UTC m=+602.103939802" observedRunningTime="2026-04-16 18:40:18.316538249 +0000 UTC m=+602.294621694" watchObservedRunningTime="2026-04-16 18:40:18.316867134 +0000 UTC m=+602.294950576" Apr 16 18:40:18.608738 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:18.608708 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/s3-tls-init-custom-8qnld"] Apr 16 18:40:18.611691 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:18.611674 2569 util.go:30] "No sandbox 
Apr 16 18:40:18.621431 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:18.617575 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/s3-tls-init-custom-8qnld"]
Apr 16 18:40:18.622356 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:18.622303 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bbvw\" (UniqueName: \"kubernetes.io/projected/5935f990-cf29-4b33-a91d-2dbfbd69678b-kube-api-access-9bbvw\") pod \"s3-tls-init-custom-8qnld\" (UID: \"5935f990-cf29-4b33-a91d-2dbfbd69678b\") " pod="kserve/s3-tls-init-custom-8qnld"
Apr 16 18:40:18.647508 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:18.647469 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="140b250b-9c29-4189-86ad-d71da2a3c6db" path="/var/lib/kubelet/pods/140b250b-9c29-4189-86ad-d71da2a3c6db/volumes"
Apr 16 18:40:18.722876 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:18.722842 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9bbvw\" (UniqueName: \"kubernetes.io/projected/5935f990-cf29-4b33-a91d-2dbfbd69678b-kube-api-access-9bbvw\") pod \"s3-tls-init-custom-8qnld\" (UID: \"5935f990-cf29-4b33-a91d-2dbfbd69678b\") " pod="kserve/s3-tls-init-custom-8qnld"
Apr 16 18:40:18.730354 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:18.730320 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bbvw\" (UniqueName: \"kubernetes.io/projected/5935f990-cf29-4b33-a91d-2dbfbd69678b-kube-api-access-9bbvw\") pod \"s3-tls-init-custom-8qnld\" (UID: \"5935f990-cf29-4b33-a91d-2dbfbd69678b\") " pod="kserve/s3-tls-init-custom-8qnld"
Apr 16 18:40:18.926294 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:18.926217 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/s3-tls-init-custom-8qnld"
Apr 16 18:40:19.036150 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:19.036130 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/s3-tls-init-custom-8qnld"]
Apr 16 18:40:19.038161 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:40:19.038136 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5935f990_cf29_4b33_a91d_2dbfbd69678b.slice/crio-9201b04ed230779508905c63bbe4b1f81f6a3d6f8ea85102866f5761cb3b0e8e WatchSource:0}: Error finding container 9201b04ed230779508905c63bbe4b1f81f6a3d6f8ea85102866f5761cb3b0e8e: Status 404 returned error can't find the container with id 9201b04ed230779508905c63bbe4b1f81f6a3d6f8ea85102866f5761cb3b0e8e
Apr 16 18:40:19.307072 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:19.307033 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-tls-init-custom-8qnld" event={"ID":"5935f990-cf29-4b33-a91d-2dbfbd69678b","Type":"ContainerStarted","Data":"4d3825e795b386b5ea2839a9973cc42451213ce37231f5b825534f946dd42a96"}
Apr 16 18:40:19.307072 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:19.307078 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-tls-init-custom-8qnld" event={"ID":"5935f990-cf29-4b33-a91d-2dbfbd69678b","Type":"ContainerStarted","Data":"9201b04ed230779508905c63bbe4b1f81f6a3d6f8ea85102866f5761cb3b0e8e"}
Apr 16 18:40:19.321364 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:19.321307 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/s3-tls-init-custom-8qnld" podStartSLOduration=1.321293253 podStartE2EDuration="1.321293253s" podCreationTimestamp="2026-04-16 18:40:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:40:19.31976139 +0000 UTC m=+603.297844834" watchObservedRunningTime="2026-04-16 18:40:19.321293253 +0000 UTC m=+603.299376698"
Apr 16 18:40:24.325234 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:24.325196 2569 generic.go:358] "Generic (PLEG): container finished" podID="5935f990-cf29-4b33-a91d-2dbfbd69678b" containerID="4d3825e795b386b5ea2839a9973cc42451213ce37231f5b825534f946dd42a96" exitCode=0
Apr 16 18:40:24.325642 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:24.325262 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-tls-init-custom-8qnld" event={"ID":"5935f990-cf29-4b33-a91d-2dbfbd69678b","Type":"ContainerDied","Data":"4d3825e795b386b5ea2839a9973cc42451213ce37231f5b825534f946dd42a96"}
Apr 16 18:40:25.460538 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:25.460517 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/s3-tls-init-custom-8qnld"
Apr 16 18:40:25.477581 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:25.477557 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bbvw\" (UniqueName: \"kubernetes.io/projected/5935f990-cf29-4b33-a91d-2dbfbd69678b-kube-api-access-9bbvw\") pod \"5935f990-cf29-4b33-a91d-2dbfbd69678b\" (UID: \"5935f990-cf29-4b33-a91d-2dbfbd69678b\") "
Apr 16 18:40:25.479616 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:25.479591 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5935f990-cf29-4b33-a91d-2dbfbd69678b-kube-api-access-9bbvw" (OuterVolumeSpecName: "kube-api-access-9bbvw") pod "5935f990-cf29-4b33-a91d-2dbfbd69678b" (UID: "5935f990-cf29-4b33-a91d-2dbfbd69678b"). InnerVolumeSpecName "kube-api-access-9bbvw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 18:40:25.578145 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:25.578108 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9bbvw\" (UniqueName: \"kubernetes.io/projected/5935f990-cf29-4b33-a91d-2dbfbd69678b-kube-api-access-9bbvw\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\""
Apr 16 18:40:26.332796 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:26.332763 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-tls-init-custom-8qnld" event={"ID":"5935f990-cf29-4b33-a91d-2dbfbd69678b","Type":"ContainerDied","Data":"9201b04ed230779508905c63bbe4b1f81f6a3d6f8ea85102866f5761cb3b0e8e"}
Apr 16 18:40:26.332796 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:26.332785 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/s3-tls-init-custom-8qnld"
Apr 16 18:40:26.332796 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:26.332794 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9201b04ed230779508905c63bbe4b1f81f6a3d6f8ea85102866f5761cb3b0e8e"
Apr 16 18:40:26.908272 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:26.908242 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg"]
Apr 16 18:40:26.908705 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:26.908506 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5935f990-cf29-4b33-a91d-2dbfbd69678b" containerName="s3-tls-init-custom"
Apr 16 18:40:26.908705 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:26.908517 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="5935f990-cf29-4b33-a91d-2dbfbd69678b" containerName="s3-tls-init-custom"
Apr 16 18:40:26.908705 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:26.908562 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="5935f990-cf29-4b33-a91d-2dbfbd69678b" containerName="s3-tls-init-custom"
Apr 16 18:40:26.925048 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:26.925019 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg"]
Apr 16 18:40:26.925183 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:26.925110 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg"
Need to start a new one" pod="kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg" Apr 16 18:40:26.927110 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:26.927088 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"seaweedfs-tls-serving-artifact\"" Apr 16 18:40:26.927218 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:26.927202 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"seaweedfs-tls-serving\"" Apr 16 18:40:26.988602 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:26.988568 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5k8j\" (UniqueName: \"kubernetes.io/projected/018134dd-e15a-4828-871f-992e5cd0ac85-kube-api-access-w5k8j\") pod \"seaweedfs-tls-serving-7fd5766db9-cl5fg\" (UID: \"018134dd-e15a-4828-871f-992e5cd0ac85\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg" Apr 16 18:40:26.988602 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:26.988604 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/018134dd-e15a-4828-871f-992e5cd0ac85-data\") pod \"seaweedfs-tls-serving-7fd5766db9-cl5fg\" (UID: \"018134dd-e15a-4828-871f-992e5cd0ac85\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg" Apr 16 18:40:26.988814 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:26.988625 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"seaweedfs-tls-serving\" (UniqueName: \"kubernetes.io/projected/018134dd-e15a-4828-871f-992e5cd0ac85-seaweedfs-tls-serving\") pod \"seaweedfs-tls-serving-7fd5766db9-cl5fg\" (UID: \"018134dd-e15a-4828-871f-992e5cd0ac85\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg" Apr 16 18:40:27.089348 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:27.089310 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w5k8j\" (UniqueName: \"kubernetes.io/projected/018134dd-e15a-4828-871f-992e5cd0ac85-kube-api-access-w5k8j\") pod \"seaweedfs-tls-serving-7fd5766db9-cl5fg\" (UID: \"018134dd-e15a-4828-871f-992e5cd0ac85\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg" Apr 16 18:40:27.089474 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:27.089366 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/018134dd-e15a-4828-871f-992e5cd0ac85-data\") pod \"seaweedfs-tls-serving-7fd5766db9-cl5fg\" (UID: \"018134dd-e15a-4828-871f-992e5cd0ac85\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg" Apr 16 18:40:27.089474 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:27.089393 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"seaweedfs-tls-serving\" (UniqueName: \"kubernetes.io/projected/018134dd-e15a-4828-871f-992e5cd0ac85-seaweedfs-tls-serving\") pod \"seaweedfs-tls-serving-7fd5766db9-cl5fg\" (UID: \"018134dd-e15a-4828-871f-992e5cd0ac85\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg" Apr 16 18:40:27.089534 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:40:27.089499 2569 projected.go:264] Couldn't get secret kserve/seaweedfs-tls-serving: secret "seaweedfs-tls-serving" not found Apr 16 18:40:27.089534 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:40:27.089517 2569 projected.go:194] Error preparing data for projected volume seaweedfs-tls-serving for pod kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg: secret "seaweedfs-tls-serving" not 
found Apr 16 18:40:27.089611 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:40:27.089590 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/018134dd-e15a-4828-871f-992e5cd0ac85-seaweedfs-tls-serving podName:018134dd-e15a-4828-871f-992e5cd0ac85 nodeName:}" failed. No retries permitted until 2026-04-16 18:40:27.589569984 +0000 UTC m=+611.567653407 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "seaweedfs-tls-serving" (UniqueName: "kubernetes.io/projected/018134dd-e15a-4828-871f-992e5cd0ac85-seaweedfs-tls-serving") pod "seaweedfs-tls-serving-7fd5766db9-cl5fg" (UID: "018134dd-e15a-4828-871f-992e5cd0ac85") : secret "seaweedfs-tls-serving" not found Apr 16 18:40:27.089835 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:27.089814 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/018134dd-e15a-4828-871f-992e5cd0ac85-data\") pod \"seaweedfs-tls-serving-7fd5766db9-cl5fg\" (UID: \"018134dd-e15a-4828-871f-992e5cd0ac85\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg" Apr 16 18:40:27.097573 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:27.097546 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5k8j\" (UniqueName: \"kubernetes.io/projected/018134dd-e15a-4828-871f-992e5cd0ac85-kube-api-access-w5k8j\") pod \"seaweedfs-tls-serving-7fd5766db9-cl5fg\" (UID: \"018134dd-e15a-4828-871f-992e5cd0ac85\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg" Apr 16 18:40:27.592886 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:27.592825 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"seaweedfs-tls-serving\" (UniqueName: \"kubernetes.io/projected/018134dd-e15a-4828-871f-992e5cd0ac85-seaweedfs-tls-serving\") pod \"seaweedfs-tls-serving-7fd5766db9-cl5fg\" (UID: \"018134dd-e15a-4828-871f-992e5cd0ac85\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg" Apr 16 18:40:27.595206 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:27.595186 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"seaweedfs-tls-serving\" (UniqueName: \"kubernetes.io/projected/018134dd-e15a-4828-871f-992e5cd0ac85-seaweedfs-tls-serving\") pod \"seaweedfs-tls-serving-7fd5766db9-cl5fg\" (UID: \"018134dd-e15a-4828-871f-992e5cd0ac85\") " pod="kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg" Apr 16 18:40:27.834197 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:27.834154 2569 util.go:30] "No sandbox for pod can be found. 
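
[Annotation] The failed mount above is not fatal: the kubelet's pending-operations queue retries it with backoff (the log shows durationBeforeRetry 500ms), and the retry at 18:40:27.592 succeeds once the secret exists. A minimal sketch of this retry pattern, assuming the usual shape of exponential backoff (the 500ms initial delay is taken from the log; the doubling and cap are assumptions for illustration, not kubelet internals):

package main

import (
	"errors"
	"fmt"
	"time"
)

// mountOnce stands in for MountVolume.SetUp; it fails until the secret
// exists. This is a hypothetical stub for illustration only.
func mountOnce(attempt int) error {
	if attempt < 2 {
		return errors.New(`secret "seaweedfs-tls-serving" not found`)
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond // matches durationBeforeRetry in the log
	const maxDelay = 2 * time.Minute
	for attempt := 0; ; attempt++ {
		err := mountOnce(attempt)
		if err == nil {
			fmt.Println("MountVolume.SetUp succeeded")
			return
		}
		fmt.Printf("mount failed: %v; no retries permitted for %v\n", err, delay)
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay { // assumed doubling with a cap
			delay = maxDelay
		}
	}
}
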
Apr 16 18:40:27.952937 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:27.952905 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg"]
Apr 16 18:40:27.956022 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:40:27.955996 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod018134dd_e15a_4828_871f_992e5cd0ac85.slice/crio-15725cabdc5d4efbfd88aabb3745a0696aaea0e6b6c1f6d3a558cc5300cca96a WatchSource:0}: Error finding container 15725cabdc5d4efbfd88aabb3745a0696aaea0e6b6c1f6d3a558cc5300cca96a: Status 404 returned error can't find the container with id 15725cabdc5d4efbfd88aabb3745a0696aaea0e6b6c1f6d3a558cc5300cca96a
Apr 16 18:40:28.340550 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:28.340516 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg" event={"ID":"018134dd-e15a-4828-871f-992e5cd0ac85","Type":"ContainerStarted","Data":"7bd5edc65c611b2f6ccd48da48161ddf1c918a7906ab8ac7c0d32f776709d7ff"}
Apr 16 18:40:28.340550 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:28.340552 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg" event={"ID":"018134dd-e15a-4828-871f-992e5cd0ac85","Type":"ContainerStarted","Data":"15725cabdc5d4efbfd88aabb3745a0696aaea0e6b6c1f6d3a558cc5300cca96a"}
Apr 16 18:40:28.354532 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:28.354484 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/seaweedfs-tls-serving-7fd5766db9-cl5fg" podStartSLOduration=2.114264132 podStartE2EDuration="2.354469714s" podCreationTimestamp="2026-04-16 18:40:26 +0000 UTC" firstStartedPulling="2026-04-16 18:40:27.957621346 +0000 UTC m=+611.935704769" lastFinishedPulling="2026-04-16 18:40:28.197826924 +0000 UTC m=+612.175910351" observedRunningTime="2026-04-16 18:40:28.353741808 +0000 UTC m=+612.331825252" watchObservedRunningTime="2026-04-16 18:40:28.354469714 +0000 UTC m=+612.332553161"
Apr 16 18:40:28.870064 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:28.870026 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/s3-tls-init-serving-84sj7"]
Apr 16 18:40:28.873001 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:28.872981 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/s3-tls-init-serving-84sj7"
Apr 16 18:40:28.880034 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:28.880012 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/s3-tls-init-serving-84sj7"]
Apr 16 18:40:28.903265 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:28.903241 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqhnm\" (UniqueName: \"kubernetes.io/projected/f8db5af2-26ab-4f7b-8609-80b4afc589c7-kube-api-access-sqhnm\") pod \"s3-tls-init-serving-84sj7\" (UID: \"f8db5af2-26ab-4f7b-8609-80b4afc589c7\") " pod="kserve/s3-tls-init-serving-84sj7"
Apr 16 18:40:29.004419 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:29.004374 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sqhnm\" (UniqueName: \"kubernetes.io/projected/f8db5af2-26ab-4f7b-8609-80b4afc589c7-kube-api-access-sqhnm\") pod \"s3-tls-init-serving-84sj7\" (UID: \"f8db5af2-26ab-4f7b-8609-80b4afc589c7\") " pod="kserve/s3-tls-init-serving-84sj7"
Apr 16 18:40:29.011518 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:29.011488 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqhnm\" (UniqueName: \"kubernetes.io/projected/f8db5af2-26ab-4f7b-8609-80b4afc589c7-kube-api-access-sqhnm\") pod \"s3-tls-init-serving-84sj7\" (UID: \"f8db5af2-26ab-4f7b-8609-80b4afc589c7\") " pod="kserve/s3-tls-init-serving-84sj7"
Apr 16 18:40:29.181868 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:29.181774 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/s3-tls-init-serving-84sj7"
Apr 16 18:40:29.293574 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:29.293540 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/s3-tls-init-serving-84sj7"]
Apr 16 18:40:29.296428 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:40:29.296399 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8db5af2_26ab_4f7b_8609_80b4afc589c7.slice/crio-a415f03e972b728933bf27862f0666be119c2ac33923f77fea6272c1219c75a9 WatchSource:0}: Error finding container a415f03e972b728933bf27862f0666be119c2ac33923f77fea6272c1219c75a9: Status 404 returned error can't find the container with id a415f03e972b728933bf27862f0666be119c2ac33923f77fea6272c1219c75a9
Apr 16 18:40:29.345391 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:29.345366 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-tls-init-serving-84sj7" event={"ID":"f8db5af2-26ab-4f7b-8609-80b4afc589c7","Type":"ContainerStarted","Data":"a415f03e972b728933bf27862f0666be119c2ac33923f77fea6272c1219c75a9"}
Apr 16 18:40:30.350291 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:30.350256 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-tls-init-serving-84sj7" event={"ID":"f8db5af2-26ab-4f7b-8609-80b4afc589c7","Type":"ContainerStarted","Data":"41b11ae5820f0d882297aeab08e563eceac44ced9772d43ec5b3834fc9743a55"}
Apr 16 18:40:30.363170 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:30.363129 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/s3-tls-init-serving-84sj7" podStartSLOduration=2.363113152 podStartE2EDuration="2.363113152s" podCreationTimestamp="2026-04-16 18:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:40:30.362202231 +0000 UTC m=+614.340285655" watchObservedRunningTime="2026-04-16 18:40:30.363113152 +0000 UTC m=+614.341196600"
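
[Annotation] The two "Observed pod startup duration" entries above make the tracker's arithmetic visible: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (seaweedfs-tls-serving: 2.354s end-to-end minus the 0.240s pull gives the reported 2.114s). Where firstStartedPulling/lastFinishedPulling are Go's zero time, 0001-01-01, no pull happened and the two figures coincide, as for s3-tls-init-serving. That reading is an inference from the logged fields, checked here against the logged numbers:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches the timestamps as printed in the log; time.Parse
	// accepts the fractional seconds even though the layout omits them.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-04-16 18:40:26 +0000 UTC")
	running := parse("2026-04-16 18:40:28.354469714 +0000 UTC")
	pullStart := parse("2026-04-16 18:40:27.957621346 +0000 UTC")
	pullEnd := parse("2026-04-16 18:40:28.197826924 +0000 UTC")

	e2e := running.Sub(created)         // podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration excludes the pull
	fmt.Println(e2e, slo)               // 2.354469714s and ~2.114264s, as logged
}
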
Apr 16 18:40:34.363555 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:34.363523 2569 generic.go:358] "Generic (PLEG): container finished" podID="f8db5af2-26ab-4f7b-8609-80b4afc589c7" containerID="41b11ae5820f0d882297aeab08e563eceac44ced9772d43ec5b3834fc9743a55" exitCode=0
Apr 16 18:40:34.363951 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:34.363563 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-tls-init-serving-84sj7" event={"ID":"f8db5af2-26ab-4f7b-8609-80b4afc589c7","Type":"ContainerDied","Data":"41b11ae5820f0d882297aeab08e563eceac44ced9772d43ec5b3834fc9743a55"}
Apr 16 18:40:35.501920 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:35.501900 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/s3-tls-init-serving-84sj7"
Apr 16 18:40:35.555509 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:35.555481 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqhnm\" (UniqueName: \"kubernetes.io/projected/f8db5af2-26ab-4f7b-8609-80b4afc589c7-kube-api-access-sqhnm\") pod \"f8db5af2-26ab-4f7b-8609-80b4afc589c7\" (UID: \"f8db5af2-26ab-4f7b-8609-80b4afc589c7\") "
Apr 16 18:40:35.557421 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:35.557394 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8db5af2-26ab-4f7b-8609-80b4afc589c7-kube-api-access-sqhnm" (OuterVolumeSpecName: "kube-api-access-sqhnm") pod "f8db5af2-26ab-4f7b-8609-80b4afc589c7" (UID: "f8db5af2-26ab-4f7b-8609-80b4afc589c7"). InnerVolumeSpecName "kube-api-access-sqhnm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 18:40:35.656847 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:35.656772 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sqhnm\" (UniqueName: \"kubernetes.io/projected/f8db5af2-26ab-4f7b-8609-80b4afc589c7-kube-api-access-sqhnm\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\""
Apr 16 18:40:36.370558 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:36.370531 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/s3-tls-init-serving-84sj7"
Apr 16 18:40:36.370733 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:36.370529 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-tls-init-serving-84sj7" event={"ID":"f8db5af2-26ab-4f7b-8609-80b4afc589c7","Type":"ContainerDied","Data":"a415f03e972b728933bf27862f0666be119c2ac33923f77fea6272c1219c75a9"}
Apr 16 18:40:36.370733 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:40:36.370641 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a415f03e972b728933bf27862f0666be119c2ac33923f77fea6272c1219c75a9"
Apr 16 18:43:46.214175 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:43:46.214130 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx"]
Apr 16 18:43:46.214630 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:43:46.214586 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f8db5af2-26ab-4f7b-8609-80b4afc589c7" containerName="s3-tls-init-serving"
Apr 16 18:43:46.214630 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:43:46.214607 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8db5af2-26ab-4f7b-8609-80b4afc589c7" containerName="s3-tls-init-serving"
Apr 16 18:43:46.214711 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:43:46.214681 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="f8db5af2-26ab-4f7b-8609-80b4afc589c7" containerName="s3-tls-init-serving"
Apr 16 18:43:46.217433 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:43:46.217418 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx"
Apr 16 18:43:46.219419 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:43:46.219401 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-z66mq\""
Apr 16 18:43:46.224759 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:43:46.224734 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx"]
Apr 16 18:43:46.227424 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:43:46.227409 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx"
Apr 16 18:43:46.343717 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:43:46.343693 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx"]
Apr 16 18:43:46.346360 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:43:46.346316 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97ee1799_d86c_4bea_9942_6e23b76a65ec.slice/crio-fd3468bdb487835592d356d8655d453131b6f79ccbf6ad602b8a97ad5dab94fe WatchSource:0}: Error finding container fd3468bdb487835592d356d8655d453131b6f79ccbf6ad602b8a97ad5dab94fe: Status 404 returned error can't find the container with id fd3468bdb487835592d356d8655d453131b6f79ccbf6ad602b8a97ad5dab94fe
Apr 16 18:43:46.348034 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:43:46.348018 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 16 18:43:46.886983 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:43:46.886952 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx" event={"ID":"97ee1799-d86c-4bea-9942-6e23b76a65ec","Type":"ContainerStarted","Data":"fd3468bdb487835592d356d8655d453131b6f79ccbf6ad602b8a97ad5dab94fe"}
Apr 16 18:43:47.891318 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:43:47.891284 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx" event={"ID":"97ee1799-d86c-4bea-9942-6e23b76a65ec","Type":"ContainerStarted","Data":"97c5192cbe775ce67c9e591094686b3b90216c959ea40d2aec4c3ea8ec17a268"}
Apr 16 18:43:47.891798 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:43:47.891520 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx"
Apr 16 18:43:47.892966 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:43:47.892949 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx"
Apr 16 18:43:47.905608 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:43:47.905563 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx" podStartSLOduration=0.952672966 podStartE2EDuration="1.905549485s" podCreationTimestamp="2026-04-16 18:43:46 +0000 UTC" firstStartedPulling="2026-04-16 18:43:46.348141165 +0000 UTC m=+810.326224588" lastFinishedPulling="2026-04-16 18:43:47.301017668 +0000 UTC m=+811.279101107" observedRunningTime="2026-04-16 18:43:47.904254315 +0000 UTC m=+811.882337772" watchObservedRunningTime="2026-04-16 18:43:47.905549485 +0000 UTC m=+811.883632929"
Apr 16 18:45:21.314329 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:45:21.314295 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_message-dumper-predictor-79c7995f46-9lfrx_97ee1799-d86c-4bea-9942-6e23b76a65ec/kserve-container/0.log"
Apr 16 18:45:21.447888 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:45:21.447852 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx"]
Apr 16 18:45:21.448168 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:45:21.448122 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx" podUID="97ee1799-d86c-4bea-9942-6e23b76a65ec" containerName="kserve-container" containerID="cri-o://97c5192cbe775ce67c9e591094686b3b90216c959ea40d2aec4c3ea8ec17a268" gracePeriod=30
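
[Annotation] "Killing container with a grace period ... gracePeriod=30" is the standard graceful-termination sequence: the runtime delivers SIGTERM, waits up to the grace period, and only then sends SIGKILL (here the container exits with code 2 at 18:45:22, well inside the 30s). The same pattern for an ordinary process, as a sketch (the command and names are illustrative, not kubelet code):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	gracePeriod := 30 * time.Second        // gracePeriod=30, as in the log
	cmd.Process.Signal(syscall.SIGTERM)    // polite request to stop
	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(gracePeriod):
		cmd.Process.Kill()                  // SIGKILL once the grace period expires
		fmt.Println("killed after grace period")
	}
}
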
podUID="97ee1799-d86c-4bea-9942-6e23b76a65ec" containerName="kserve-container" containerID="cri-o://97c5192cbe775ce67c9e591094686b3b90216c959ea40d2aec4c3ea8ec17a268" gracePeriod=30 Apr 16 18:45:21.675536 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:45:21.675515 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx" Apr 16 18:45:22.150854 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:45:22.150813 2569 generic.go:358] "Generic (PLEG): container finished" podID="97ee1799-d86c-4bea-9942-6e23b76a65ec" containerID="97c5192cbe775ce67c9e591094686b3b90216c959ea40d2aec4c3ea8ec17a268" exitCode=2 Apr 16 18:45:22.151103 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:45:22.150873 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx" Apr 16 18:45:22.151103 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:45:22.150874 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx" event={"ID":"97ee1799-d86c-4bea-9942-6e23b76a65ec","Type":"ContainerDied","Data":"97c5192cbe775ce67c9e591094686b3b90216c959ea40d2aec4c3ea8ec17a268"} Apr 16 18:45:22.151103 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:45:22.150972 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx" event={"ID":"97ee1799-d86c-4bea-9942-6e23b76a65ec","Type":"ContainerDied","Data":"fd3468bdb487835592d356d8655d453131b6f79ccbf6ad602b8a97ad5dab94fe"} Apr 16 18:45:22.151103 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:45:22.150987 2569 scope.go:117] "RemoveContainer" containerID="97c5192cbe775ce67c9e591094686b3b90216c959ea40d2aec4c3ea8ec17a268" Apr 16 18:45:22.158970 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:45:22.158952 2569 scope.go:117] "RemoveContainer" containerID="97c5192cbe775ce67c9e591094686b3b90216c959ea40d2aec4c3ea8ec17a268" Apr 16 18:45:22.159298 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:45:22.159212 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97c5192cbe775ce67c9e591094686b3b90216c959ea40d2aec4c3ea8ec17a268\": container with ID starting with 97c5192cbe775ce67c9e591094686b3b90216c959ea40d2aec4c3ea8ec17a268 not found: ID does not exist" containerID="97c5192cbe775ce67c9e591094686b3b90216c959ea40d2aec4c3ea8ec17a268" Apr 16 18:45:22.159298 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:45:22.159238 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c5192cbe775ce67c9e591094686b3b90216c959ea40d2aec4c3ea8ec17a268"} err="failed to get container status \"97c5192cbe775ce67c9e591094686b3b90216c959ea40d2aec4c3ea8ec17a268\": rpc error: code = NotFound desc = could not find container \"97c5192cbe775ce67c9e591094686b3b90216c959ea40d2aec4c3ea8ec17a268\": container with ID starting with 97c5192cbe775ce67c9e591094686b3b90216c959ea40d2aec4c3ea8ec17a268 not found: ID does not exist" Apr 16 18:45:22.169301 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:45:22.169274 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx"] Apr 16 18:45:22.174699 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:45:22.174676 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/message-dumper-predictor-79c7995f46-9lfrx"] Apr 16 18:45:22.647455 ip-10-0-132-14 
Apr 16 18:55:26.211894 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:26.211855 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr"]
Apr 16 18:55:26.212401 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:26.212118 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="97ee1799-d86c-4bea-9942-6e23b76a65ec" containerName="kserve-container"
Apr 16 18:55:26.212401 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:26.212128 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ee1799-d86c-4bea-9942-6e23b76a65ec" containerName="kserve-container"
Apr 16 18:55:26.212401 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:26.212182 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="97ee1799-d86c-4bea-9942-6e23b76a65ec" containerName="kserve-container"
Apr 16 18:55:26.214974 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:26.214958 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr"
Apr 16 18:55:26.216785 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:26.216764 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-z66mq\""
Apr 16 18:55:26.221657 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:26.221372 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr"]
Apr 16 18:55:26.398991 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:26.398949 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a1fa46a3-264b-44e7-bd3c-b3f05284ef65-kserve-provision-location\") pod \"isvc-paddle-v2-kserve-predictor-679d448945-vvlqr\" (UID: \"a1fa46a3-264b-44e7-bd3c-b3f05284ef65\") " pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr"
Apr 16 18:55:26.499569 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:26.499474 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a1fa46a3-264b-44e7-bd3c-b3f05284ef65-kserve-provision-location\") pod \"isvc-paddle-v2-kserve-predictor-679d448945-vvlqr\" (UID: \"a1fa46a3-264b-44e7-bd3c-b3f05284ef65\") " pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr"
Apr 16 18:55:26.499843 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:26.499821 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a1fa46a3-264b-44e7-bd3c-b3f05284ef65-kserve-provision-location\") pod \"isvc-paddle-v2-kserve-predictor-679d448945-vvlqr\" (UID: \"a1fa46a3-264b-44e7-bd3c-b3f05284ef65\") " pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr"
Apr 16 18:55:26.525424 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:26.525398 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr"
Apr 16 18:55:26.640774 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:26.640750 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr"]
Apr 16 18:55:26.643402 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:55:26.643366 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1fa46a3_264b_44e7_bd3c_b3f05284ef65.slice/crio-fe25447947703dc02f4b738e006bddaa77d6914951978eb555735cf0a554b3c1 WatchSource:0}: Error finding container fe25447947703dc02f4b738e006bddaa77d6914951978eb555735cf0a554b3c1: Status 404 returned error can't find the container with id fe25447947703dc02f4b738e006bddaa77d6914951978eb555735cf0a554b3c1
Apr 16 18:55:26.645509 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:26.645485 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 16 18:55:26.768962 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:26.768863 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr" event={"ID":"a1fa46a3-264b-44e7-bd3c-b3f05284ef65","Type":"ContainerStarted","Data":"fe25447947703dc02f4b738e006bddaa77d6914951978eb555735cf0a554b3c1"}
Apr 16 18:55:30.785771 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:30.785730 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr" event={"ID":"a1fa46a3-264b-44e7-bd3c-b3f05284ef65","Type":"ContainerStarted","Data":"8ecccd5ab5c95e85420f2104a3864db0a12cf39a851eab57acf7313dae8f8b93"}
Apr 16 18:55:48.835879 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:48.835829 2569 generic.go:358] "Generic (PLEG): container finished" podID="a1fa46a3-264b-44e7-bd3c-b3f05284ef65" containerID="8ecccd5ab5c95e85420f2104a3864db0a12cf39a851eab57acf7313dae8f8b93" exitCode=0
Apr 16 18:55:48.836294 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:48.835903 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr" event={"ID":"a1fa46a3-264b-44e7-bd3c-b3f05284ef65","Type":"ContainerDied","Data":"8ecccd5ab5c95e85420f2104a3864db0a12cf39a851eab57acf7313dae8f8b93"}
Apr 16 18:55:59.875019 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:59.874976 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr" event={"ID":"a1fa46a3-264b-44e7-bd3c-b3f05284ef65","Type":"ContainerStarted","Data":"f578ecae96c197679fec9fe1e52be49accde62b12b742f115997378b79db9f18"}
Apr 16 18:55:59.875492 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:59.875284 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr"
Apr 16 18:55:59.876619 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:55:59.876592 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr" podUID="a1fa46a3-264b-44e7-bd3c-b3f05284ef65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused"
Apr 16 18:56:00.878094 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:00.878056 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr" podUID="a1fa46a3-264b-44e7-bd3c-b3f05284ef65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused"
Apr 16 18:56:10.878318 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:10.878273 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr" podUID="a1fa46a3-264b-44e7-bd3c-b3f05284ef65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused"
Apr 16 18:56:20.878542 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:20.878497 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr" podUID="a1fa46a3-264b-44e7-bd3c-b3f05284ef65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused"
Apr 16 18:56:30.878488 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:30.878446 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr" podUID="a1fa46a3-264b-44e7-bd3c-b3f05284ef65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused"
Apr 16 18:56:40.878977 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:40.878924 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr" podUID="a1fa46a3-264b-44e7-bd3c-b3f05284ef65" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.24:8080: connect: connection refused"
Apr 16 18:56:50.879560 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:50.879519 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr"
Apr 16 18:56:50.894549 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:50.894493 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr" podStartSLOduration=51.795735025 podStartE2EDuration="1m24.894478654s" podCreationTimestamp="2026-04-16 18:55:26 +0000 UTC" firstStartedPulling="2026-04-16 18:55:26.645661988 +0000 UTC m=+1510.623745418" lastFinishedPulling="2026-04-16 18:55:59.74440562 +0000 UTC m=+1543.722489047" observedRunningTime="2026-04-16 18:55:59.889191358 +0000 UTC m=+1543.867274804" watchObservedRunningTime="2026-04-16 18:56:50.894478654 +0000 UTC m=+1594.872562099"
Apr 16 18:56:57.959040 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:57.959008 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr"]
Apr 16 18:56:57.959434 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:57.959275 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr" podUID="a1fa46a3-264b-44e7-bd3c-b3f05284ef65" containerName="kserve-container" containerID="cri-o://f578ecae96c197679fec9fe1e52be49accde62b12b742f115997378b79db9f18" gracePeriod=30
Apr 16 18:56:58.052136 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:58.052104 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g"]
Apr 16 18:56:58.055314 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:58.055298 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g"
Apr 16 18:56:58.062261 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:58.062231 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g"]
Apr 16 18:56:58.191552 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:58.191505 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/e7043542-9dd8-4266-8b75-19892dce1aa9-kserve-provision-location\") pod \"isvc-pmml-predictor-89795c578-f6b5g\" (UID: \"e7043542-9dd8-4266-8b75-19892dce1aa9\") " pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g"
Apr 16 18:56:58.292472 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:58.292388 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/e7043542-9dd8-4266-8b75-19892dce1aa9-kserve-provision-location\") pod \"isvc-pmml-predictor-89795c578-f6b5g\" (UID: \"e7043542-9dd8-4266-8b75-19892dce1aa9\") " pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g"
Apr 16 18:56:58.292766 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:58.292745 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/e7043542-9dd8-4266-8b75-19892dce1aa9-kserve-provision-location\") pod \"isvc-pmml-predictor-89795c578-f6b5g\" (UID: \"e7043542-9dd8-4266-8b75-19892dce1aa9\") " pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g"
Apr 16 18:56:58.366511 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:58.366485 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g"
Apr 16 18:56:58.485114 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:58.485008 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g"]
Apr 16 18:56:58.488506 ip-10-0-132-14 kubenswrapper[2569]: W0416 18:56:58.488469 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7043542_9dd8_4266_8b75_19892dce1aa9.slice/crio-df1870cfc33140d11b0455cb377d0b109ebe1f3a34074dc159e2ff341833d4bb WatchSource:0}: Error finding container df1870cfc33140d11b0455cb377d0b109ebe1f3a34074dc159e2ff341833d4bb: Status 404 returned error can't find the container with id df1870cfc33140d11b0455cb377d0b109ebe1f3a34074dc159e2ff341833d4bb
Apr 16 18:56:59.039647 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:59.039611 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" event={"ID":"e7043542-9dd8-4266-8b75-19892dce1aa9","Type":"ContainerStarted","Data":"92c879884a8c1f262b63ec0d44f87b31dd1774e118d52de88685993638ad05ef"}
Apr 16 18:56:59.039647 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:56:59.039651 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" event={"ID":"e7043542-9dd8-4266-8b75-19892dce1aa9","Type":"ContainerStarted","Data":"df1870cfc33140d11b0455cb377d0b109ebe1f3a34074dc159e2ff341833d4bb"}
Apr 16 18:57:00.599884 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:00.599863 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr"
Apr 16 18:57:00.710652 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:00.710629 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a1fa46a3-264b-44e7-bd3c-b3f05284ef65-kserve-provision-location\") pod \"a1fa46a3-264b-44e7-bd3c-b3f05284ef65\" (UID: \"a1fa46a3-264b-44e7-bd3c-b3f05284ef65\") "
Apr 16 18:57:00.718696 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:00.718671 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1fa46a3-264b-44e7-bd3c-b3f05284ef65-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "a1fa46a3-264b-44e7-bd3c-b3f05284ef65" (UID: "a1fa46a3-264b-44e7-bd3c-b3f05284ef65"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 16 18:57:00.811756 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:00.811719 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/a1fa46a3-264b-44e7-bd3c-b3f05284ef65-kserve-provision-location\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\""
Apr 16 18:57:01.046872 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:01.046781 2569 generic.go:358] "Generic (PLEG): container finished" podID="a1fa46a3-264b-44e7-bd3c-b3f05284ef65" containerID="f578ecae96c197679fec9fe1e52be49accde62b12b742f115997378b79db9f18" exitCode=0
Apr 16 18:57:01.046872 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:01.046857 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr"
Apr 16 18:57:01.047088 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:01.046873 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr" event={"ID":"a1fa46a3-264b-44e7-bd3c-b3f05284ef65","Type":"ContainerDied","Data":"f578ecae96c197679fec9fe1e52be49accde62b12b742f115997378b79db9f18"}
Apr 16 18:57:01.047088 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:01.046919 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr" event={"ID":"a1fa46a3-264b-44e7-bd3c-b3f05284ef65","Type":"ContainerDied","Data":"fe25447947703dc02f4b738e006bddaa77d6914951978eb555735cf0a554b3c1"}
Apr 16 18:57:01.047088 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:01.046940 2569 scope.go:117] "RemoveContainer" containerID="f578ecae96c197679fec9fe1e52be49accde62b12b742f115997378b79db9f18"
Apr 16 18:57:01.055137 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:01.055120 2569 scope.go:117] "RemoveContainer" containerID="8ecccd5ab5c95e85420f2104a3864db0a12cf39a851eab57acf7313dae8f8b93"
Apr 16 18:57:01.061799 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:01.061784 2569 scope.go:117] "RemoveContainer" containerID="f578ecae96c197679fec9fe1e52be49accde62b12b742f115997378b79db9f18"
Apr 16 18:57:01.062042 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:57:01.062022 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f578ecae96c197679fec9fe1e52be49accde62b12b742f115997378b79db9f18\": container with ID starting with f578ecae96c197679fec9fe1e52be49accde62b12b742f115997378b79db9f18 not found: ID does not exist" containerID="f578ecae96c197679fec9fe1e52be49accde62b12b742f115997378b79db9f18"
Apr 16 18:57:01.062086 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:01.062052 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f578ecae96c197679fec9fe1e52be49accde62b12b742f115997378b79db9f18"} err="failed to get container status \"f578ecae96c197679fec9fe1e52be49accde62b12b742f115997378b79db9f18\": rpc error: code = NotFound desc = could not find container \"f578ecae96c197679fec9fe1e52be49accde62b12b742f115997378b79db9f18\": container with ID starting with f578ecae96c197679fec9fe1e52be49accde62b12b742f115997378b79db9f18 not found: ID does not exist"
Apr 16 18:57:01.062086 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:01.062069 2569 scope.go:117] "RemoveContainer" containerID="8ecccd5ab5c95e85420f2104a3864db0a12cf39a851eab57acf7313dae8f8b93"
Apr 16 18:57:01.062278 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:57:01.062263 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ecccd5ab5c95e85420f2104a3864db0a12cf39a851eab57acf7313dae8f8b93\": container with ID starting with 8ecccd5ab5c95e85420f2104a3864db0a12cf39a851eab57acf7313dae8f8b93 not found: ID does not exist" containerID="8ecccd5ab5c95e85420f2104a3864db0a12cf39a851eab57acf7313dae8f8b93"
Apr 16 18:57:01.062322 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:01.062282 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ecccd5ab5c95e85420f2104a3864db0a12cf39a851eab57acf7313dae8f8b93"} err="failed to get container status \"8ecccd5ab5c95e85420f2104a3864db0a12cf39a851eab57acf7313dae8f8b93\": rpc error: code = NotFound desc = could not find container \"8ecccd5ab5c95e85420f2104a3864db0a12cf39a851eab57acf7313dae8f8b93\": container with ID starting with 8ecccd5ab5c95e85420f2104a3864db0a12cf39a851eab57acf7313dae8f8b93 not found: ID does not exist"
Apr 16 18:57:01.067655 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:01.067634 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr"]
Apr 16 18:57:01.071537 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:01.071517 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-679d448945-vvlqr"]
Apr 16 18:57:02.647904 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:02.647872 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1fa46a3-264b-44e7-bd3c-b3f05284ef65" path="/var/lib/kubelet/pods/a1fa46a3-264b-44e7-bd3c-b3f05284ef65/volumes"
Apr 16 18:57:03.054662 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:03.054632 2569 generic.go:358] "Generic (PLEG): container finished" podID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerID="92c879884a8c1f262b63ec0d44f87b31dd1774e118d52de88685993638ad05ef" exitCode=0
Apr 16 18:57:03.054827 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:03.054690 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" event={"ID":"e7043542-9dd8-4266-8b75-19892dce1aa9","Type":"ContainerDied","Data":"92c879884a8c1f262b63ec0d44f87b31dd1774e118d52de88685993638ad05ef"}
Apr 16 18:57:10.082037 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:10.081929 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" event={"ID":"e7043542-9dd8-4266-8b75-19892dce1aa9","Type":"ContainerStarted","Data":"a03cc29e9eedcd62a5459e07a8de00223f744977dd9534f51f6915398097c106"}
Apr 16 18:57:10.082433 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:10.082358 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g"
Apr 16 18:57:10.083627 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:10.083602 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused"
Apr 16 18:57:10.097632 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:10.097569 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" podStartSLOduration=5.375321775 podStartE2EDuration="12.097555027s" podCreationTimestamp="2026-04-16 18:56:58 +0000 UTC" firstStartedPulling="2026-04-16 18:57:03.055909938 +0000 UTC m=+1607.033993364" lastFinishedPulling="2026-04-16 18:57:09.778143194 +0000 UTC m=+1613.756226616" observedRunningTime="2026-04-16 18:57:10.096826238 +0000 UTC m=+1614.074909683" watchObservedRunningTime="2026-04-16 18:57:10.097555027 +0000 UTC m=+1614.075638485"
Apr 16 18:57:11.085795 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:11.085757 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused"
Apr 16 18:57:21.086186 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:21.086137 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused"
Apr 16 18:57:31.086412 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:31.086365 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused"
Apr 16 18:57:41.086217 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:41.086173 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused"
Apr 16 18:57:51.086234 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:57:51.086178 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused"
Apr 16 18:58:01.086526 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:01.086483 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused"
Apr 16 18:58:11.086552 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:11.086500 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused"
Apr 16 18:58:21.085890 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:21.085844 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused"
Apr 16 18:58:29.645505 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:29.645475 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g"
Apr 16 18:58:38.958531 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:38.958446 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g"]
Apr 16 18:58:38.958983 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:38.958796 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerName="kserve-container" containerID="cri-o://a03cc29e9eedcd62a5459e07a8de00223f744977dd9534f51f6915398097c106" gracePeriod=30
Apr 16 18:58:39.645538 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:39.645499 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.25:8080: connect: connection refused"
Apr 16 18:58:42.590034 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:42.590009 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g"
Apr 16 18:58:42.744027 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:42.743996 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/e7043542-9dd8-4266-8b75-19892dce1aa9-kserve-provision-location\") pod \"e7043542-9dd8-4266-8b75-19892dce1aa9\" (UID: \"e7043542-9dd8-4266-8b75-19892dce1aa9\") "
Apr 16 18:58:42.744405 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:42.744379 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7043542-9dd8-4266-8b75-19892dce1aa9-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "e7043542-9dd8-4266-8b75-19892dce1aa9" (UID: "e7043542-9dd8-4266-8b75-19892dce1aa9"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 16 18:58:42.844880 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:42.844840 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/e7043542-9dd8-4266-8b75-19892dce1aa9-kserve-provision-location\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\""
Apr 16 18:58:43.339129 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:43.339090 2569 generic.go:358] "Generic (PLEG): container finished" podID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerID="a03cc29e9eedcd62a5459e07a8de00223f744977dd9534f51f6915398097c106" exitCode=0
Apr 16 18:58:43.339297 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:43.339154 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" event={"ID":"e7043542-9dd8-4266-8b75-19892dce1aa9","Type":"ContainerDied","Data":"a03cc29e9eedcd62a5459e07a8de00223f744977dd9534f51f6915398097c106"}
Apr 16 18:58:43.339297 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:43.339179 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g" event={"ID":"e7043542-9dd8-4266-8b75-19892dce1aa9","Type":"ContainerDied","Data":"df1870cfc33140d11b0455cb377d0b109ebe1f3a34074dc159e2ff341833d4bb"}
Apr 16 18:58:43.339297 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:43.339196 2569 scope.go:117] "RemoveContainer" containerID="a03cc29e9eedcd62a5459e07a8de00223f744977dd9534f51f6915398097c106"
Apr 16 18:58:43.339297 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:43.339158 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g"
Apr 16 18:58:43.347321 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:43.347298 2569 scope.go:117] "RemoveContainer" containerID="92c879884a8c1f262b63ec0d44f87b31dd1774e118d52de88685993638ad05ef"
Apr 16 18:58:43.354202 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:43.354185 2569 scope.go:117] "RemoveContainer" containerID="a03cc29e9eedcd62a5459e07a8de00223f744977dd9534f51f6915398097c106"
Apr 16 18:58:43.354452 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:58:43.354436 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a03cc29e9eedcd62a5459e07a8de00223f744977dd9534f51f6915398097c106\": container with ID starting with a03cc29e9eedcd62a5459e07a8de00223f744977dd9534f51f6915398097c106 not found: ID does not exist" containerID="a03cc29e9eedcd62a5459e07a8de00223f744977dd9534f51f6915398097c106"
Apr 16 18:58:43.354502 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:43.354460 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a03cc29e9eedcd62a5459e07a8de00223f744977dd9534f51f6915398097c106"} err="failed to get container status \"a03cc29e9eedcd62a5459e07a8de00223f744977dd9534f51f6915398097c106\": rpc error: code = NotFound desc = could not find container \"a03cc29e9eedcd62a5459e07a8de00223f744977dd9534f51f6915398097c106\": container with ID starting with a03cc29e9eedcd62a5459e07a8de00223f744977dd9534f51f6915398097c106 not found: ID does not exist"
Apr 16 18:58:43.354502 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:43.354477 2569 scope.go:117] "RemoveContainer" containerID="92c879884a8c1f262b63ec0d44f87b31dd1774e118d52de88685993638ad05ef"
Apr 16 18:58:43.354680 ip-10-0-132-14 kubenswrapper[2569]: E0416 18:58:43.354663 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92c879884a8c1f262b63ec0d44f87b31dd1774e118d52de88685993638ad05ef\": container with ID starting with 92c879884a8c1f262b63ec0d44f87b31dd1774e118d52de88685993638ad05ef not found: ID does not exist" containerID="92c879884a8c1f262b63ec0d44f87b31dd1774e118d52de88685993638ad05ef"
Apr 16 18:58:43.354718 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:43.354684 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92c879884a8c1f262b63ec0d44f87b31dd1774e118d52de88685993638ad05ef"} err="failed to get container status \"92c879884a8c1f262b63ec0d44f87b31dd1774e118d52de88685993638ad05ef\": rpc error: code = NotFound desc = could not find container \"92c879884a8c1f262b63ec0d44f87b31dd1774e118d52de88685993638ad05ef\": container with ID starting with 92c879884a8c1f262b63ec0d44f87b31dd1774e118d52de88685993638ad05ef not found: ID does not exist"
Apr 16 18:58:43.359481 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:43.359459 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g"]
Apr 16 18:58:43.361984 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:43.361967 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-pmml-predictor-89795c578-f6b5g"]
Apr 16 18:58:44.647836 ip-10-0-132-14 kubenswrapper[2569]: I0416 18:58:44.647802 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" path="/var/lib/kubelet/pods/e7043542-9dd8-4266-8b75-19892dce1aa9/volumes"
Apr 16 19:00:20.721945 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.721914 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm"]
Apr 16 19:00:20.722538 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.722308 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a1fa46a3-264b-44e7-bd3c-b3f05284ef65" containerName="storage-initializer"
Apr 16 19:00:20.722538 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.722327 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fa46a3-264b-44e7-bd3c-b3f05284ef65" containerName="storage-initializer"
Apr 16 19:00:20.722538 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.722354 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerName="storage-initializer"
Apr 16 19:00:20.722538 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.722376 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerName="storage-initializer"
Apr 16 19:00:20.722538 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.722396 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a1fa46a3-264b-44e7-bd3c-b3f05284ef65" containerName="kserve-container"
Apr 16 19:00:20.722538 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.722404 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fa46a3-264b-44e7-bd3c-b3f05284ef65" containerName="kserve-container"
Apr 16 19:00:20.722538 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.722415 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerName="kserve-container"
Apr 16 19:00:20.722538 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.722424 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerName="kserve-container"
Apr 16 19:00:20.722538 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.722503 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="e7043542-9dd8-4266-8b75-19892dce1aa9" containerName="kserve-container"
Apr 16 19:00:20.722538 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.722515 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="a1fa46a3-264b-44e7-bd3c-b3f05284ef65" containerName="kserve-container"
Apr 16 19:00:20.725690 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.725669 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm"
Apr 16 19:00:20.727511 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.727492 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-z66mq\""
Apr 16 19:00:20.733032 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.733008 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm"]
Apr 16 19:00:20.850137 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.850096 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/ec3895bd-f601-4c07-a92d-ad9bb96fd5f4-kserve-provision-location\") pod \"isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm\" (UID: \"ec3895bd-f601-4c07-a92d-ad9bb96fd5f4\") " pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm"
Apr 16 19:00:20.951122 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.951090 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/ec3895bd-f601-4c07-a92d-ad9bb96fd5f4-kserve-provision-location\") pod \"isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm\" (UID: \"ec3895bd-f601-4c07-a92d-ad9bb96fd5f4\") " pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm"
Apr 16 19:00:20.951459 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:20.951443 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/ec3895bd-f601-4c07-a92d-ad9bb96fd5f4-kserve-provision-location\") pod \"isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm\" (UID: \"ec3895bd-f601-4c07-a92d-ad9bb96fd5f4\") " pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm"
Apr 16 19:00:21.035922 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:21.035835 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm"
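
[Annotation] The burst of "RemoveStaleState" / "Deleted CPUSet assignment" entries at each pod admission is the CPU and memory managers purging per-container state left behind by pods the API server no longer knows about, so stale podUID-to-container entries cannot pin resources. The bookkeeping amounts to a map keyed by pod UID and container name; a sketch under that assumption (types, names, and values are illustrative, not kubelet internals):

package main

import "fmt"

type key struct{ podUID, container string }

// assignments stands in for the cpu_manager's containerMap/CPUSet state.
var assignments = map[key]string{
	{"a1fa46a3-264b-44e7-bd3c-b3f05284ef65", "kserve-container"}: "cpus 0-3",
	{"e7043542-9dd8-4266-8b75-19892dce1aa9", "kserve-container"}: "cpus 0-3",
}

// removeStaleState drops state for pods that are no longer active.
func removeStaleState(activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	// Only the newly admitted pod is active; both old entries are purged.
	removeStaleState(map[string]bool{"ec3895bd-f601-4c07-a92d-ad9bb96fd5f4": true})
}
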
Need to start a new one" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" Apr 16 19:00:21.145692 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:21.145661 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm"] Apr 16 19:00:21.148970 ip-10-0-132-14 kubenswrapper[2569]: W0416 19:00:21.148943 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec3895bd_f601_4c07_a92d_ad9bb96fd5f4.slice/crio-68f17431e5fb12e7ababfde2d7ef1d19dca749e244bfa9e027a0846772ef6614 WatchSource:0}: Error finding container 68f17431e5fb12e7ababfde2d7ef1d19dca749e244bfa9e027a0846772ef6614: Status 404 returned error can't find the container with id 68f17431e5fb12e7ababfde2d7ef1d19dca749e244bfa9e027a0846772ef6614 Apr 16 19:00:21.609409 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:21.609374 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" event={"ID":"ec3895bd-f601-4c07-a92d-ad9bb96fd5f4","Type":"ContainerStarted","Data":"a6e73e6af14dc9531e139861072cb9f9b279ac443ce49d8b40d740d4c3a590a1"} Apr 16 19:00:21.609589 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:21.609418 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" event={"ID":"ec3895bd-f601-4c07-a92d-ad9bb96fd5f4","Type":"ContainerStarted","Data":"68f17431e5fb12e7ababfde2d7ef1d19dca749e244bfa9e027a0846772ef6614"} Apr 16 19:00:25.622051 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:25.622017 2569 generic.go:358] "Generic (PLEG): container finished" podID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerID="a6e73e6af14dc9531e139861072cb9f9b279ac443ce49d8b40d740d4c3a590a1" exitCode=0 Apr 16 19:00:25.622462 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:25.622072 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" event={"ID":"ec3895bd-f601-4c07-a92d-ad9bb96fd5f4","Type":"ContainerDied","Data":"a6e73e6af14dc9531e139861072cb9f9b279ac443ce49d8b40d740d4c3a590a1"} Apr 16 19:00:26.626659 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:26.626627 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" event={"ID":"ec3895bd-f601-4c07-a92d-ad9bb96fd5f4","Type":"ContainerStarted","Data":"78b451a60d68330733f4c54f57182cb846506117982b21a17d63241797b368b3"} Apr 16 19:00:26.627069 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:26.626909 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" Apr 16 19:00:26.628121 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:26.628083 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 19:00:26.642266 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:26.642136 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" podStartSLOduration=6.642121051 podStartE2EDuration="6.642121051s" podCreationTimestamp="2026-04-16 19:00:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 19:00:26.641572154 +0000 UTC m=+1810.619655593" watchObservedRunningTime="2026-04-16 19:00:26.642121051 +0000 UTC m=+1810.620204497" Apr 16 19:00:27.630119 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:27.630084 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 19:00:37.630732 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:37.630688 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 19:00:47.630909 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:47.630863 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 19:00:57.630635 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:00:57.630590 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 19:01:07.630964 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:07.630915 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 19:01:17.631072 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:17.631028 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 19:01:27.630667 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:27.630624 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 19:01:28.643730 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:28.643683 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 19:01:38.644512 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:38.644419 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" 
containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.26:8080: connect: connection refused" Apr 16 19:01:48.647715 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:48.647688 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" Apr 16 19:01:52.136483 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:52.136452 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm"] Apr 16 19:01:52.136933 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:52.136734 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerName="kserve-container" containerID="cri-o://78b451a60d68330733f4c54f57182cb846506117982b21a17d63241797b368b3" gracePeriod=30 Apr 16 19:01:55.667656 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:55.667634 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" Apr 16 19:01:55.713316 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:55.713288 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/ec3895bd-f601-4c07-a92d-ad9bb96fd5f4-kserve-provision-location\") pod \"ec3895bd-f601-4c07-a92d-ad9bb96fd5f4\" (UID: \"ec3895bd-f601-4c07-a92d-ad9bb96fd5f4\") " Apr 16 19:01:55.713654 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:55.713625 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec3895bd-f601-4c07-a92d-ad9bb96fd5f4-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" (UID: "ec3895bd-f601-4c07-a92d-ad9bb96fd5f4"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 19:01:55.814616 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:55.814542 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/ec3895bd-f601-4c07-a92d-ad9bb96fd5f4-kserve-provision-location\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 19:01:55.871076 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:55.871040 2569 generic.go:358] "Generic (PLEG): container finished" podID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerID="78b451a60d68330733f4c54f57182cb846506117982b21a17d63241797b368b3" exitCode=0 Apr 16 19:01:55.871202 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:55.871109 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm"
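The run above is the recurring kserve readiness pattern on this node: the kserve-container for isvc-pmml-v2-kserve-predictor starts at 19:00:26 but refuses TCP connections on 10.132.0.26:8080 while the model server loads, so the kubelet logs "Probe failed ... connection refused" roughly every ten seconds until the probe flips to ready at 19:01:48, about 82 seconds of not-ready time; the pod is then deleted at 19:01:52 with gracePeriod=30 and the teardown (volume unmount, ContainerDied) completes well inside that window at 19:01:55. A minimal sketch for measuring such not-ready windows from a saved journal, assuming one entry per line as journalctl normally emits (the helper name and approach are illustrative, not part of any kserve or OpenShift tooling):

```python
#!/usr/bin/env python3
# probe_window.py (hypothetical helper): per pod, time from the first failed
# readiness probe to the status="ready" flip, read from a kubelet journal on stdin.
import re
import sys
from datetime import datetime

TS = re.compile(r"^(\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+)")  # e.g. "Apr 16 19:00:26.628121"
FAIL = re.compile(r'"Probe failed" probeType="Readiness" pod="([^"]+)"')
READY = re.compile(r'probe="readiness" status="ready" pod="([^"]+)"')

def ts(line: str) -> datetime:
    # The journal omits the year; deltas within one capture are still correct.
    return datetime.strptime(TS.match(line).group(1), "%b %d %H:%M:%S.%f")

first_fail: dict[str, datetime] = {}
for line in sys.stdin:
    if not TS.match(line):
        continue
    m = FAIL.search(line)
    if m:
        first_fail.setdefault(m.group(1), ts(line))
    m = READY.search(line)
    if m and m.group(1) in first_fail:
        delta = ts(line) - first_fail.pop(m.group(1))
        print(f"{m.group(1)}: not ready for ~{delta.total_seconds():.0f}s")
```

For the pmml-v2 predictor above this would report roughly 82s (19:00:26.628 to 19:01:48.647).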
Apr 16 19:01:55.871202 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:55.871120 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" event={"ID":"ec3895bd-f601-4c07-a92d-ad9bb96fd5f4","Type":"ContainerDied","Data":"78b451a60d68330733f4c54f57182cb846506117982b21a17d63241797b368b3"} Apr 16 19:01:55.871202 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:55.871158 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm" event={"ID":"ec3895bd-f601-4c07-a92d-ad9bb96fd5f4","Type":"ContainerDied","Data":"68f17431e5fb12e7ababfde2d7ef1d19dca749e244bfa9e027a0846772ef6614"} Apr 16 19:01:55.871202 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:55.871173 2569 scope.go:117] "RemoveContainer" containerID="78b451a60d68330733f4c54f57182cb846506117982b21a17d63241797b368b3" Apr 16 19:01:55.878951 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:55.878923 2569 scope.go:117] "RemoveContainer" containerID="a6e73e6af14dc9531e139861072cb9f9b279ac443ce49d8b40d740d4c3a590a1" Apr 16 19:01:55.885692 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:55.885678 2569 scope.go:117] "RemoveContainer" containerID="78b451a60d68330733f4c54f57182cb846506117982b21a17d63241797b368b3" Apr 16 19:01:55.885893 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:01:55.885876 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78b451a60d68330733f4c54f57182cb846506117982b21a17d63241797b368b3\": container with ID starting with 78b451a60d68330733f4c54f57182cb846506117982b21a17d63241797b368b3 not found: ID does not exist" containerID="78b451a60d68330733f4c54f57182cb846506117982b21a17d63241797b368b3" Apr 16 19:01:55.885955 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:55.885898 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78b451a60d68330733f4c54f57182cb846506117982b21a17d63241797b368b3"} err="failed to get container status \"78b451a60d68330733f4c54f57182cb846506117982b21a17d63241797b368b3\": rpc error: code = NotFound desc = could not find container \"78b451a60d68330733f4c54f57182cb846506117982b21a17d63241797b368b3\": container with ID starting with 78b451a60d68330733f4c54f57182cb846506117982b21a17d63241797b368b3 not found: ID does not exist" Apr 16 19:01:55.885955 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:55.885913 2569 scope.go:117] "RemoveContainer" containerID="a6e73e6af14dc9531e139861072cb9f9b279ac443ce49d8b40d740d4c3a590a1" Apr 16 19:01:55.886124 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:01:55.886109 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6e73e6af14dc9531e139861072cb9f9b279ac443ce49d8b40d740d4c3a590a1\": container with ID starting with a6e73e6af14dc9531e139861072cb9f9b279ac443ce49d8b40d740d4c3a590a1 not found: ID does not exist" containerID="a6e73e6af14dc9531e139861072cb9f9b279ac443ce49d8b40d740d4c3a590a1" Apr 16 19:01:55.886165 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:55.886128 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6e73e6af14dc9531e139861072cb9f9b279ac443ce49d8b40d740d4c3a590a1"} err="failed to get container status \"a6e73e6af14dc9531e139861072cb9f9b279ac443ce49d8b40d740d4c3a590a1\": rpc error: code =
NotFound desc = could not find container \"a6e73e6af14dc9531e139861072cb9f9b279ac443ce49d8b40d740d4c3a590a1\": container with ID starting with a6e73e6af14dc9531e139861072cb9f9b279ac443ce49d8b40d740d4c3a590a1 not found: ID does not exist" Apr 16 19:01:55.891965 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:55.891946 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm"] Apr 16 19:01:55.897424 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:55.897405 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-5b65fdc9dd-b5rhm"] Apr 16 19:01:56.648377 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:01:56.648324 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" path="/var/lib/kubelet/pods/ec3895bd-f601-4c07-a92d-ad9bb96fd5f4/volumes" Apr 16 19:03:38.786968 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:38.786930 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p"] Apr 16 19:03:38.787430 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:38.787188 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerName="kserve-container" Apr 16 19:03:38.787430 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:38.787198 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerName="kserve-container" Apr 16 19:03:38.787430 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:38.787211 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerName="storage-initializer" Apr 16 19:03:38.787430 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:38.787216 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerName="storage-initializer" Apr 16 19:03:38.787430 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:38.787268 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="ec3895bd-f601-4c07-a92d-ad9bb96fd5f4" containerName="kserve-container" Apr 16 19:03:38.793625 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:38.793598 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" Apr 16 19:03:38.795550 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:38.795504 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-z66mq\"" Apr 16 19:03:38.797353 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:38.797311 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p"] Apr 16 19:03:38.865875 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:38.865838 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/820ba194-3f0d-4fa0-b30c-28ce07a6bc34-kserve-provision-location\") pod \"isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p\" (UID: \"820ba194-3f0d-4fa0-b30c-28ce07a6bc34\") " pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" Apr 16 19:03:38.967099 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:38.967061 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/820ba194-3f0d-4fa0-b30c-28ce07a6bc34-kserve-provision-location\") pod \"isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p\" (UID: \"820ba194-3f0d-4fa0-b30c-28ce07a6bc34\") " pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" Apr 16 19:03:38.967471 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:38.967451 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/820ba194-3f0d-4fa0-b30c-28ce07a6bc34-kserve-provision-location\") pod \"isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p\" (UID: \"820ba194-3f0d-4fa0-b30c-28ce07a6bc34\") " pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" Apr 16 19:03:39.104596 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:39.104508 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" Apr 16 19:03:39.229303 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:39.229274 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p"] Apr 16 19:03:39.232376 ip-10-0-132-14 kubenswrapper[2569]: W0416 19:03:39.232348 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod820ba194_3f0d_4fa0_b30c_28ce07a6bc34.slice/crio-7374d48bc88c9279d913da2993a760045689116341ba3930ee2b3af94e4d7293 WatchSource:0}: Error finding container 7374d48bc88c9279d913da2993a760045689116341ba3930ee2b3af94e4d7293: Status 404 returned error can't find the container with id 7374d48bc88c9279d913da2993a760045689116341ba3930ee2b3af94e4d7293 Apr 16 19:03:39.234116 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:39.234095 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 19:03:40.162147 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:40.162106 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" event={"ID":"820ba194-3f0d-4fa0-b30c-28ce07a6bc34","Type":"ContainerStarted","Data":"44b042bc937d13185043a361160c43a475cc8cd4d1ddacf8f860c0cb94157a63"} Apr 16 19:03:40.162147 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:40.162149 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" event={"ID":"820ba194-3f0d-4fa0-b30c-28ce07a6bc34","Type":"ContainerStarted","Data":"7374d48bc88c9279d913da2993a760045689116341ba3930ee2b3af94e4d7293"} Apr 16 19:03:44.173417 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:44.173380 2569 generic.go:358] "Generic (PLEG): container finished" podID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerID="44b042bc937d13185043a361160c43a475cc8cd4d1ddacf8f860c0cb94157a63" exitCode=0 Apr 16 19:03:44.173825 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:03:44.173450 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" event={"ID":"820ba194-3f0d-4fa0-b30c-28ce07a6bc34","Type":"ContainerDied","Data":"44b042bc937d13185043a361160c43a475cc8cd4d1ddacf8f860c0cb94157a63"} Apr 16 19:04:05.247709 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:04:05.247628 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" event={"ID":"820ba194-3f0d-4fa0-b30c-28ce07a6bc34","Type":"ContainerStarted","Data":"95fe02933b49142aeb6968eec4b6bf353e90de3aeef1af896635ddbce992cb58"} Apr 16 19:04:05.248134 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:04:05.247911 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" Apr 16 19:04:05.249180 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:04:05.249152 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" podUID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" Apr 16 19:04:05.261697 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:04:05.261649 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" podStartSLOduration=6.447447893 podStartE2EDuration="27.261638959s" podCreationTimestamp="2026-04-16 19:03:38 +0000 UTC" firstStartedPulling="2026-04-16 19:03:44.17451614 +0000 UTC m=+2008.152599563" lastFinishedPulling="2026-04-16 19:04:04.988707207 +0000 UTC m=+2028.966790629" observedRunningTime="2026-04-16 19:04:05.261446739 +0000 UTC m=+2029.239530184" watchObservedRunningTime="2026-04-16 19:04:05.261638959 +0000 UTC m=+2029.239722407" Apr 16 19:04:06.251011 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:04:06.250974 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" podUID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" Apr 16 19:04:16.251870 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:04:16.251826 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" podUID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" Apr 16 19:04:26.251232 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:04:26.251185 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" podUID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" Apr 16 19:04:36.251816 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:04:36.251723 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" podUID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" Apr 16 19:04:46.251279 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:04:46.251231 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" podUID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" Apr 16 19:04:56.251986 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:04:56.251941 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" podUID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" Apr 16 19:05:06.251204 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:06.251157 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" podUID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" Apr 16 19:05:08.644760 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:08.644714 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" podUID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.27:8080: connect: connection refused" 
Apr 16 19:05:18.647530 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:18.647499 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" Apr 16 19:05:28.904563 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:28.904532 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p"] Apr 16 19:05:28.904932 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:28.904803 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" podUID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerName="kserve-container" containerID="cri-o://95fe02933b49142aeb6968eec4b6bf353e90de3aeef1af896635ddbce992cb58" gracePeriod=30 Apr 16 19:05:28.976277 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:28.976243 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld"] Apr 16 19:05:28.979699 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:28.979681 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" Apr 16 19:05:28.986697 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:28.986514 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld"] Apr 16 19:05:29.105994 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:29.105957 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/8b74b66d-7537-4046-a91d-b8946eecbbbb-kserve-provision-location\") pod \"isvc-predictive-xgboost-predictor-577fdc969f-d8sld\" (UID: \"8b74b66d-7537-4046-a91d-b8946eecbbbb\") " pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" Apr 16 19:05:29.207278 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:29.207245 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/8b74b66d-7537-4046-a91d-b8946eecbbbb-kserve-provision-location\") pod \"isvc-predictive-xgboost-predictor-577fdc969f-d8sld\" (UID: \"8b74b66d-7537-4046-a91d-b8946eecbbbb\") " pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" Apr 16 19:05:29.207640 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:29.207621 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/8b74b66d-7537-4046-a91d-b8946eecbbbb-kserve-provision-location\") pod \"isvc-predictive-xgboost-predictor-577fdc969f-d8sld\" (UID: \"8b74b66d-7537-4046-a91d-b8946eecbbbb\") " pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" Apr 16 19:05:29.290009 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:29.289930 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" Apr 16 19:05:29.402837 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:29.402802 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld"] Apr 16 19:05:29.405732 ip-10-0-132-14 kubenswrapper[2569]: W0416 19:05:29.405704 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b74b66d_7537_4046_a91d_b8946eecbbbb.slice/crio-c0962ff9ecdd1cdb9a34516797e26a35ee7891eda2faa9360c97cc8e0df2ac21 WatchSource:0}: Error finding container c0962ff9ecdd1cdb9a34516797e26a35ee7891eda2faa9360c97cc8e0df2ac21: Status 404 returned error can't find the container with id c0962ff9ecdd1cdb9a34516797e26a35ee7891eda2faa9360c97cc8e0df2ac21 Apr 16 19:05:29.474897 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:29.474868 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" event={"ID":"8b74b66d-7537-4046-a91d-b8946eecbbbb","Type":"ContainerStarted","Data":"aada645256295cd24b98275255a3ab33338802b6d0e1e891663ca45cdfc8b08b"} Apr 16 19:05:29.475013 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:29.474907 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" event={"ID":"8b74b66d-7537-4046-a91d-b8946eecbbbb","Type":"ContainerStarted","Data":"c0962ff9ecdd1cdb9a34516797e26a35ee7891eda2faa9360c97cc8e0df2ac21"} Apr 16 19:05:33.487171 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:33.487139 2569 generic.go:358] "Generic (PLEG): container finished" podID="8b74b66d-7537-4046-a91d-b8946eecbbbb" containerID="aada645256295cd24b98275255a3ab33338802b6d0e1e891663ca45cdfc8b08b" exitCode=0 Apr 16 19:05:33.487562 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:33.487215 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" event={"ID":"8b74b66d-7537-4046-a91d-b8946eecbbbb","Type":"ContainerDied","Data":"aada645256295cd24b98275255a3ab33338802b6d0e1e891663ca45cdfc8b08b"} Apr 16 19:05:33.741245 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:33.741223 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" Apr 16 19:05:33.847532 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:33.847441 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/820ba194-3f0d-4fa0-b30c-28ce07a6bc34-kserve-provision-location\") pod \"820ba194-3f0d-4fa0-b30c-28ce07a6bc34\" (UID: \"820ba194-3f0d-4fa0-b30c-28ce07a6bc34\") " Apr 16 19:05:33.847780 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:33.847757 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/820ba194-3f0d-4fa0-b30c-28ce07a6bc34-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "820ba194-3f0d-4fa0-b30c-28ce07a6bc34" (UID: "820ba194-3f0d-4fa0-b30c-28ce07a6bc34"). InnerVolumeSpecName "kserve-provision-location". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 19:05:33.948642 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:33.948605 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/820ba194-3f0d-4fa0-b30c-28ce07a6bc34-kserve-provision-location\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 19:05:34.492443 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.492397 2569 generic.go:358] "Generic (PLEG): container finished" podID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerID="95fe02933b49142aeb6968eec4b6bf353e90de3aeef1af896635ddbce992cb58" exitCode=0 Apr 16 19:05:34.493023 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.492490 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" Apr 16 19:05:34.493023 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.492490 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" event={"ID":"820ba194-3f0d-4fa0-b30c-28ce07a6bc34","Type":"ContainerDied","Data":"95fe02933b49142aeb6968eec4b6bf353e90de3aeef1af896635ddbce992cb58"} Apr 16 19:05:34.493023 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.492554 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p" event={"ID":"820ba194-3f0d-4fa0-b30c-28ce07a6bc34","Type":"ContainerDied","Data":"7374d48bc88c9279d913da2993a760045689116341ba3930ee2b3af94e4d7293"} Apr 16 19:05:34.493023 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.492587 2569 scope.go:117] "RemoveContainer" containerID="95fe02933b49142aeb6968eec4b6bf353e90de3aeef1af896635ddbce992cb58" Apr 16 19:05:34.494509 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.494486 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" event={"ID":"8b74b66d-7537-4046-a91d-b8946eecbbbb","Type":"ContainerStarted","Data":"59e1594e8919827d3c011e7249304b762af6c07ba940027afad613f0402cee82"} Apr 16 19:05:34.494815 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.494790 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" Apr 16 19:05:34.496976 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.496943 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" podUID="8b74b66d-7537-4046-a91d-b8946eecbbbb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused" Apr 16 19:05:34.503156 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.503134 2569 scope.go:117] "RemoveContainer" containerID="44b042bc937d13185043a361160c43a475cc8cd4d1ddacf8f860c0cb94157a63" Apr 16 19:05:34.510440 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.510421 2569 scope.go:117] "RemoveContainer" containerID="95fe02933b49142aeb6968eec4b6bf353e90de3aeef1af896635ddbce992cb58" Apr 16 19:05:34.510699 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:05:34.510683 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95fe02933b49142aeb6968eec4b6bf353e90de3aeef1af896635ddbce992cb58\": container with ID starting with 
95fe02933b49142aeb6968eec4b6bf353e90de3aeef1af896635ddbce992cb58 not found: ID does not exist" containerID="95fe02933b49142aeb6968eec4b6bf353e90de3aeef1af896635ddbce992cb58" Apr 16 19:05:34.510769 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.510708 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95fe02933b49142aeb6968eec4b6bf353e90de3aeef1af896635ddbce992cb58"} err="failed to get container status \"95fe02933b49142aeb6968eec4b6bf353e90de3aeef1af896635ddbce992cb58\": rpc error: code = NotFound desc = could not find container \"95fe02933b49142aeb6968eec4b6bf353e90de3aeef1af896635ddbce992cb58\": container with ID starting with 95fe02933b49142aeb6968eec4b6bf353e90de3aeef1af896635ddbce992cb58 not found: ID does not exist" Apr 16 19:05:34.510769 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.510727 2569 scope.go:117] "RemoveContainer" containerID="44b042bc937d13185043a361160c43a475cc8cd4d1ddacf8f860c0cb94157a63" Apr 16 19:05:34.510865 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.510827 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" podStartSLOduration=6.51081524 podStartE2EDuration="6.51081524s" podCreationTimestamp="2026-04-16 19:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 19:05:34.508483847 +0000 UTC m=+2118.486567302" watchObservedRunningTime="2026-04-16 19:05:34.51081524 +0000 UTC m=+2118.488898745" Apr 16 19:05:34.510996 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:05:34.510976 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44b042bc937d13185043a361160c43a475cc8cd4d1ddacf8f860c0cb94157a63\": container with ID starting with 44b042bc937d13185043a361160c43a475cc8cd4d1ddacf8f860c0cb94157a63 not found: ID does not exist" containerID="44b042bc937d13185043a361160c43a475cc8cd4d1ddacf8f860c0cb94157a63" Apr 16 19:05:34.511086 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.511004 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44b042bc937d13185043a361160c43a475cc8cd4d1ddacf8f860c0cb94157a63"} err="failed to get container status \"44b042bc937d13185043a361160c43a475cc8cd4d1ddacf8f860c0cb94157a63\": rpc error: code = NotFound desc = could not find container \"44b042bc937d13185043a361160c43a475cc8cd4d1ddacf8f860c0cb94157a63\": container with ID starting with 44b042bc937d13185043a361160c43a475cc8cd4d1ddacf8f860c0cb94157a63 not found: ID does not exist" Apr 16 19:05:34.521191 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.521165 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p"] Apr 16 19:05:34.524865 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.524844 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-5864bd8d8b-vgh4p"] Apr 16 19:05:34.647514 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:34.647474 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" path="/var/lib/kubelet/pods/820ba194-3f0d-4fa0-b30c-28ce07a6bc34/volumes" Apr 16 19:05:35.498839 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:35.498799 2569 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" podUID="8b74b66d-7537-4046-a91d-b8946eecbbbb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused" Apr 16 19:05:45.499221 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:45.499170 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" podUID="8b74b66d-7537-4046-a91d-b8946eecbbbb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused" Apr 16 19:05:55.499439 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:05:55.499392 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" podUID="8b74b66d-7537-4046-a91d-b8946eecbbbb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused" Apr 16 19:06:05.499888 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:05.499793 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" podUID="8b74b66d-7537-4046-a91d-b8946eecbbbb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused" Apr 16 19:06:15.498939 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:15.498894 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" podUID="8b74b66d-7537-4046-a91d-b8946eecbbbb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused" Apr 16 19:06:25.499023 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:25.498975 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" podUID="8b74b66d-7537-4046-a91d-b8946eecbbbb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused" Apr 16 19:06:35.499446 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:35.499400 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" podUID="8b74b66d-7537-4046-a91d-b8946eecbbbb" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.28:8080: connect: connection refused" Apr 16 19:06:45.500565 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:45.500529 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" Apr 16 19:06:49.089784 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:49.089748 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld"] Apr 16 19:06:49.090285 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:49.090002 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" podUID="8b74b66d-7537-4046-a91d-b8946eecbbbb" containerName="kserve-container" containerID="cri-o://59e1594e8919827d3c011e7249304b762af6c07ba940027afad613f0402cee82" gracePeriod=30 Apr 16 19:06:53.931908 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:53.931883 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" Apr 16 19:06:54.042417 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:54.042314 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/8b74b66d-7537-4046-a91d-b8946eecbbbb-kserve-provision-location\") pod \"8b74b66d-7537-4046-a91d-b8946eecbbbb\" (UID: \"8b74b66d-7537-4046-a91d-b8946eecbbbb\") " Apr 16 19:06:54.042630 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:54.042606 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b74b66d-7537-4046-a91d-b8946eecbbbb-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "8b74b66d-7537-4046-a91d-b8946eecbbbb" (UID: "8b74b66d-7537-4046-a91d-b8946eecbbbb"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 19:06:54.143820 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:54.143775 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/8b74b66d-7537-4046-a91d-b8946eecbbbb-kserve-provision-location\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 19:06:54.725240 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:54.725205 2569 generic.go:358] "Generic (PLEG): container finished" podID="8b74b66d-7537-4046-a91d-b8946eecbbbb" containerID="59e1594e8919827d3c011e7249304b762af6c07ba940027afad613f0402cee82" exitCode=0 Apr 16 19:06:54.725432 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:54.725288 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" Apr 16 19:06:54.725432 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:54.725307 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" event={"ID":"8b74b66d-7537-4046-a91d-b8946eecbbbb","Type":"ContainerDied","Data":"59e1594e8919827d3c011e7249304b762af6c07ba940027afad613f0402cee82"} Apr 16 19:06:54.725432 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:54.725329 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld" event={"ID":"8b74b66d-7537-4046-a91d-b8946eecbbbb","Type":"ContainerDied","Data":"c0962ff9ecdd1cdb9a34516797e26a35ee7891eda2faa9360c97cc8e0df2ac21"} Apr 16 19:06:54.725432 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:54.725365 2569 scope.go:117] "RemoveContainer" containerID="59e1594e8919827d3c011e7249304b762af6c07ba940027afad613f0402cee82" Apr 16 19:06:54.733241 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:54.733221 2569 scope.go:117] "RemoveContainer" containerID="aada645256295cd24b98275255a3ab33338802b6d0e1e891663ca45cdfc8b08b" Apr 16 19:06:54.739948 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:54.739924 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld"] Apr 16 19:06:54.740509 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:54.740489 2569 scope.go:117] "RemoveContainer" containerID="59e1594e8919827d3c011e7249304b762af6c07ba940027afad613f0402cee82" Apr 16 19:06:54.740810 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:06:54.740789 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"59e1594e8919827d3c011e7249304b762af6c07ba940027afad613f0402cee82\": container with ID starting with 59e1594e8919827d3c011e7249304b762af6c07ba940027afad613f0402cee82 not found: ID does not exist" containerID="59e1594e8919827d3c011e7249304b762af6c07ba940027afad613f0402cee82" Apr 16 19:06:54.740922 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:54.740817 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59e1594e8919827d3c011e7249304b762af6c07ba940027afad613f0402cee82"} err="failed to get container status \"59e1594e8919827d3c011e7249304b762af6c07ba940027afad613f0402cee82\": rpc error: code = NotFound desc = could not find container \"59e1594e8919827d3c011e7249304b762af6c07ba940027afad613f0402cee82\": container with ID starting with 59e1594e8919827d3c011e7249304b762af6c07ba940027afad613f0402cee82 not found: ID does not exist" Apr 16 19:06:54.740922 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:54.740837 2569 scope.go:117] "RemoveContainer" containerID="aada645256295cd24b98275255a3ab33338802b6d0e1e891663ca45cdfc8b08b" Apr 16 19:06:54.741083 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:06:54.741067 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aada645256295cd24b98275255a3ab33338802b6d0e1e891663ca45cdfc8b08b\": container with ID starting with aada645256295cd24b98275255a3ab33338802b6d0e1e891663ca45cdfc8b08b not found: ID does not exist" containerID="aada645256295cd24b98275255a3ab33338802b6d0e1e891663ca45cdfc8b08b" Apr 16 19:06:54.741119 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:54.741089 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aada645256295cd24b98275255a3ab33338802b6d0e1e891663ca45cdfc8b08b"} err="failed to get container status \"aada645256295cd24b98275255a3ab33338802b6d0e1e891663ca45cdfc8b08b\": rpc error: code = NotFound desc = could not find container \"aada645256295cd24b98275255a3ab33338802b6d0e1e891663ca45cdfc8b08b\": container with ID starting with aada645256295cd24b98275255a3ab33338802b6d0e1e891663ca45cdfc8b08b not found: ID does not exist" Apr 16 19:06:54.744081 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:54.744057 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-577fdc969f-d8sld"] Apr 16 19:06:56.648394 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:06:56.648362 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b74b66d-7537-4046-a91d-b8946eecbbbb" path="/var/lib/kubelet/pods/8b74b66d-7537-4046-a91d-b8946eecbbbb/volumes"
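The error-level pairs above ("ContainerStatus from runtime service failed" then "DeleteContainer returned error", both NotFound) appear for every predictor torn down in this capture, and each comes immediately after a successful "RemoveContainer" for the same ID (here at 19:06:54 for the xgboost predictor's containers 59e1... and aada...): the kubelet re-queries CRI-O for a container it has just removed, gets NotFound, and logs it, so in this teardown sequence the errors read as expected noise rather than runtime failures. A sketch for filtering them when scanning a journal like this one, again assuming one entry per line and offered only as an illustration:

```python
# Flag NotFound container-status errors that merely follow a RemoveContainer
# for the same ID, so genuinely unexpected CRI-O lookup failures stand out.
import re
import sys

REMOVE = re.compile(r'"RemoveContainer" containerID="([0-9a-f]{64})"')
NOTFOUND = re.compile(r'"ContainerStatus from runtime service failed".*containerID="([0-9a-f]{64})"')

removed = set()
for line in sys.stdin:
    m = REMOVE.search(line)
    if m:
        removed.add(m.group(1))
    m = NOTFOUND.search(line)
    if m:
        kind = "expected, just removed" if m.group(1) in removed else "UNEXPECTED"
        print(f"{m.group(1)[:12]}... NotFound ({kind})")
```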
Apr 16 19:08:29.519611 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.519571 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj"] Apr 16 19:08:29.520087 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.519827 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerName="kserve-container" Apr 16 19:08:29.520087 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.519844 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerName="kserve-container" Apr 16 19:08:29.520087 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.519856 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b74b66d-7537-4046-a91d-b8946eecbbbb" containerName="kserve-container" Apr 16 19:08:29.520087 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.519862 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b74b66d-7537-4046-a91d-b8946eecbbbb" containerName="kserve-container" Apr 16 19:08:29.520087 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.519870 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerName="storage-initializer" Apr 16 19:08:29.520087 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.519876 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerName="storage-initializer" Apr 16 19:08:29.520087 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.519888 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b74b66d-7537-4046-a91d-b8946eecbbbb" containerName="storage-initializer" Apr 16 19:08:29.520087 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.519893 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b74b66d-7537-4046-a91d-b8946eecbbbb" containerName="storage-initializer" Apr 16 19:08:29.520087 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.519955 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="820ba194-3f0d-4fa0-b30c-28ce07a6bc34" containerName="kserve-container" Apr 16 19:08:29.520087 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.519963 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="8b74b66d-7537-4046-a91d-b8946eecbbbb" containerName="kserve-container" Apr 16 19:08:29.522667 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.522649 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" Apr 16 19:08:29.524531 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.524510 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-z66mq\"" Apr 16 19:08:29.530809 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.530781 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj"] Apr 16 19:08:29.630425 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.630392 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/249f4566-3bc9-4706-b0bd-f49a4d341470-kserve-provision-location\") pod \"isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj\" (UID: \"249f4566-3bc9-4706-b0bd-f49a4d341470\") " pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" Apr 16 19:08:29.731546 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.731495 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/249f4566-3bc9-4706-b0bd-f49a4d341470-kserve-provision-location\") pod \"isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj\" (UID: \"249f4566-3bc9-4706-b0bd-f49a4d341470\") " pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" Apr 16 19:08:29.731890 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.731869 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/249f4566-3bc9-4706-b0bd-f49a4d341470-kserve-provision-location\") pod
\"isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj\" (UID: \"249f4566-3bc9-4706-b0bd-f49a4d341470\") " pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" Apr 16 19:08:29.833267 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.833178 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" Apr 16 19:08:29.949706 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.949663 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj"] Apr 16 19:08:29.952405 ip-10-0-132-14 kubenswrapper[2569]: W0416 19:08:29.952378 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod249f4566_3bc9_4706_b0bd_f49a4d341470.slice/crio-6b909bcd2be4ece51a73b38d13255853b5b3e0b57599898065efcedbba5a77fb WatchSource:0}: Error finding container 6b909bcd2be4ece51a73b38d13255853b5b3e0b57599898065efcedbba5a77fb: Status 404 returned error can't find the container with id 6b909bcd2be4ece51a73b38d13255853b5b3e0b57599898065efcedbba5a77fb Apr 16 19:08:29.988614 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:29.988585 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" event={"ID":"249f4566-3bc9-4706-b0bd-f49a4d341470","Type":"ContainerStarted","Data":"6b909bcd2be4ece51a73b38d13255853b5b3e0b57599898065efcedbba5a77fb"} Apr 16 19:08:30.992299 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:30.992263 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" event={"ID":"249f4566-3bc9-4706-b0bd-f49a4d341470","Type":"ContainerStarted","Data":"81f5361d936f86fdca55783dc8cb910c5b07a574ace02e8ac93595381f403cd0"} Apr 16 19:08:34.002097 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:34.002057 2569 generic.go:358] "Generic (PLEG): container finished" podID="249f4566-3bc9-4706-b0bd-f49a4d341470" containerID="81f5361d936f86fdca55783dc8cb910c5b07a574ace02e8ac93595381f403cd0" exitCode=0 Apr 16 19:08:34.002527 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:34.002127 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" event={"ID":"249f4566-3bc9-4706-b0bd-f49a4d341470","Type":"ContainerDied","Data":"81f5361d936f86fdca55783dc8cb910c5b07a574ace02e8ac93595381f403cd0"} Apr 16 19:08:35.006468 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:35.006433 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" event={"ID":"249f4566-3bc9-4706-b0bd-f49a4d341470","Type":"ContainerStarted","Data":"92e81b8ecd7e440f139496477c465ead413fc9c0fd013422afe76fb00df05aef"} Apr 16 19:08:35.006875 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:35.006631 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" Apr 16 19:08:35.026666 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:08:35.026609 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" podStartSLOduration=6.026593778 podStartE2EDuration="6.026593778s" podCreationTimestamp="2026-04-16 19:08:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 19:08:35.020850867 +0000 UTC m=+2298.998934314" watchObservedRunningTime="2026-04-16 19:08:35.026593778 +0000 UTC m=+2299.004677230" Apr 16 19:09:06.011476 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:09:06.011391 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" podUID="249f4566-3bc9-4706-b0bd-f49a4d341470" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.29:8080/v2/models/isvc-predictive-sklearn-v2/ready\": dial tcp 10.132.0.29:8080: connect: connection refused" Apr 16 19:09:16.010818 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:09:16.010775 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" podUID="249f4566-3bc9-4706-b0bd-f49a4d341470" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.29:8080/v2/models/isvc-predictive-sklearn-v2/ready\": dial tcp 10.132.0.29:8080: connect: connection refused" Apr 16 19:09:26.010301 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:09:26.010259 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" podUID="249f4566-3bc9-4706-b0bd-f49a4d341470" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.29:8080/v2/models/isvc-predictive-sklearn-v2/ready\": dial tcp 10.132.0.29:8080: connect: connection refused" Apr 16 19:09:36.010652 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:09:36.010604 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" podUID="249f4566-3bc9-4706-b0bd-f49a4d341470" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.29:8080/v2/models/isvc-predictive-sklearn-v2/ready\": dial tcp 10.132.0.29:8080: connect: connection refused" Apr 16 19:09:43.644256 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:09:43.644216 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" podUID="249f4566-3bc9-4706-b0bd-f49a4d341470" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.29:8080/v2/models/isvc-predictive-sklearn-v2/ready\": dial tcp 10.132.0.29:8080: connect: connection refused" Apr 16 19:09:53.647866 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:09:53.647824 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" Apr 16 19:09:59.659898 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:09:59.659864 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj"] Apr 16 19:09:59.660366 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:09:59.660235 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" podUID="249f4566-3bc9-4706-b0bd-f49a4d341470" containerName="kserve-container" containerID="cri-o://92e81b8ecd7e440f139496477c465ead413fc9c0fd013422afe76fb00df05aef" gracePeriod=30 Apr 16 19:09:59.740470 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:09:59.740438 2569 kubelet.go:2537] "SyncLoop ADD" 
source="api" pods=["kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2"] Apr 16 19:09:59.743495 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:09:59.743473 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" Apr 16 19:09:59.752438 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:09:59.752416 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2"] Apr 16 19:09:59.900933 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:09:59.900889 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0573d8be-fea5-4a44-b830-4e7e9ba8c01c-kserve-provision-location\") pod \"isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2\" (UID: \"0573d8be-fea5-4a44-b830-4e7e9ba8c01c\") " pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" Apr 16 19:10:00.001844 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:00.001809 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0573d8be-fea5-4a44-b830-4e7e9ba8c01c-kserve-provision-location\") pod \"isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2\" (UID: \"0573d8be-fea5-4a44-b830-4e7e9ba8c01c\") " pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" Apr 16 19:10:00.002155 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:00.002132 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0573d8be-fea5-4a44-b830-4e7e9ba8c01c-kserve-provision-location\") pod \"isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2\" (UID: \"0573d8be-fea5-4a44-b830-4e7e9ba8c01c\") " pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" Apr 16 19:10:00.053982 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:00.053949 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" Apr 16 19:10:00.166768 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:00.166738 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2"] Apr 16 19:10:00.169943 ip-10-0-132-14 kubenswrapper[2569]: W0416 19:10:00.169912 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0573d8be_fea5_4a44_b830_4e7e9ba8c01c.slice/crio-f3f6d78c90965016df954a565a95933d53f6235e04efe8442e5bc89ef3d331c1 WatchSource:0}: Error finding container f3f6d78c90965016df954a565a95933d53f6235e04efe8442e5bc89ef3d331c1: Status 404 returned error can't find the container with id f3f6d78c90965016df954a565a95933d53f6235e04efe8442e5bc89ef3d331c1 Apr 16 19:10:00.174223 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:00.174203 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 19:10:00.247437 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:00.247402 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" event={"ID":"0573d8be-fea5-4a44-b830-4e7e9ba8c01c","Type":"ContainerStarted","Data":"f55c5a0b01de708e7c4a42768479b551c4356363f2c1deca3bd4e8d478fcce92"} Apr 16 19:10:00.247564 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:00.247440 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" event={"ID":"0573d8be-fea5-4a44-b830-4e7e9ba8c01c","Type":"ContainerStarted","Data":"f3f6d78c90965016df954a565a95933d53f6235e04efe8442e5bc89ef3d331c1"} Apr 16 19:10:03.644211 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:03.644163 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" podUID="249f4566-3bc9-4706-b0bd-f49a4d341470" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.29:8080/v2/models/isvc-predictive-sklearn-v2/ready\": dial tcp 10.132.0.29:8080: connect: connection refused" Apr 16 19:10:04.258580 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:04.258547 2569 generic.go:358] "Generic (PLEG): container finished" podID="0573d8be-fea5-4a44-b830-4e7e9ba8c01c" containerID="f55c5a0b01de708e7c4a42768479b551c4356363f2c1deca3bd4e8d478fcce92" exitCode=0 Apr 16 19:10:04.258769 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:04.258626 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" event={"ID":"0573d8be-fea5-4a44-b830-4e7e9ba8c01c","Type":"ContainerDied","Data":"f55c5a0b01de708e7c4a42768479b551c4356363f2c1deca3bd4e8d478fcce92"} Apr 16 19:10:04.599660 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:04.599633 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" Apr 16 19:10:04.736034 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:04.736003 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/249f4566-3bc9-4706-b0bd-f49a4d341470-kserve-provision-location\") pod \"249f4566-3bc9-4706-b0bd-f49a4d341470\" (UID: \"249f4566-3bc9-4706-b0bd-f49a4d341470\") " Apr 16 19:10:04.736419 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:04.736315 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/249f4566-3bc9-4706-b0bd-f49a4d341470-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "249f4566-3bc9-4706-b0bd-f49a4d341470" (UID: "249f4566-3bc9-4706-b0bd-f49a4d341470"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 19:10:04.837182 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:04.837096 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/249f4566-3bc9-4706-b0bd-f49a4d341470-kserve-provision-location\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 19:10:05.262731 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:05.262698 2569 generic.go:358] "Generic (PLEG): container finished" podID="249f4566-3bc9-4706-b0bd-f49a4d341470" containerID="92e81b8ecd7e440f139496477c465ead413fc9c0fd013422afe76fb00df05aef" exitCode=0 Apr 16 19:10:05.262932 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:05.262779 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" Apr 16 19:10:05.262932 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:05.262784 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" event={"ID":"249f4566-3bc9-4706-b0bd-f49a4d341470","Type":"ContainerDied","Data":"92e81b8ecd7e440f139496477c465ead413fc9c0fd013422afe76fb00df05aef"} Apr 16 19:10:05.262932 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:05.262833 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj" event={"ID":"249f4566-3bc9-4706-b0bd-f49a4d341470","Type":"ContainerDied","Data":"6b909bcd2be4ece51a73b38d13255853b5b3e0b57599898065efcedbba5a77fb"} Apr 16 19:10:05.262932 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:05.262857 2569 scope.go:117] "RemoveContainer" containerID="92e81b8ecd7e440f139496477c465ead413fc9c0fd013422afe76fb00df05aef" Apr 16 19:10:05.264542 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:05.264519 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" event={"ID":"0573d8be-fea5-4a44-b830-4e7e9ba8c01c","Type":"ContainerStarted","Data":"b1386e377a8816daec7d8053797b5b532f3dab0cbc4fae9d45472370bb0da9ce"} Apr 16 19:10:05.264730 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:05.264716 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" Apr 16 19:10:05.271002 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:05.270964 2569 scope.go:117] "RemoveContainer" 
containerID="81f5361d936f86fdca55783dc8cb910c5b07a574ace02e8ac93595381f403cd0" Apr 16 19:10:05.278318 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:05.278303 2569 scope.go:117] "RemoveContainer" containerID="92e81b8ecd7e440f139496477c465ead413fc9c0fd013422afe76fb00df05aef" Apr 16 19:10:05.278607 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:10:05.278588 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92e81b8ecd7e440f139496477c465ead413fc9c0fd013422afe76fb00df05aef\": container with ID starting with 92e81b8ecd7e440f139496477c465ead413fc9c0fd013422afe76fb00df05aef not found: ID does not exist" containerID="92e81b8ecd7e440f139496477c465ead413fc9c0fd013422afe76fb00df05aef" Apr 16 19:10:05.278736 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:05.278621 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92e81b8ecd7e440f139496477c465ead413fc9c0fd013422afe76fb00df05aef"} err="failed to get container status \"92e81b8ecd7e440f139496477c465ead413fc9c0fd013422afe76fb00df05aef\": rpc error: code = NotFound desc = could not find container \"92e81b8ecd7e440f139496477c465ead413fc9c0fd013422afe76fb00df05aef\": container with ID starting with 92e81b8ecd7e440f139496477c465ead413fc9c0fd013422afe76fb00df05aef not found: ID does not exist" Apr 16 19:10:05.278736 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:05.278646 2569 scope.go:117] "RemoveContainer" containerID="81f5361d936f86fdca55783dc8cb910c5b07a574ace02e8ac93595381f403cd0" Apr 16 19:10:05.278926 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:10:05.278909 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81f5361d936f86fdca55783dc8cb910c5b07a574ace02e8ac93595381f403cd0\": container with ID starting with 81f5361d936f86fdca55783dc8cb910c5b07a574ace02e8ac93595381f403cd0 not found: ID does not exist" containerID="81f5361d936f86fdca55783dc8cb910c5b07a574ace02e8ac93595381f403cd0" Apr 16 19:10:05.278961 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:05.278932 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81f5361d936f86fdca55783dc8cb910c5b07a574ace02e8ac93595381f403cd0"} err="failed to get container status \"81f5361d936f86fdca55783dc8cb910c5b07a574ace02e8ac93595381f403cd0\": rpc error: code = NotFound desc = could not find container \"81f5361d936f86fdca55783dc8cb910c5b07a574ace02e8ac93595381f403cd0\": container with ID starting with 81f5361d936f86fdca55783dc8cb910c5b07a574ace02e8ac93595381f403cd0 not found: ID does not exist" Apr 16 19:10:05.284193 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:05.284152 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" podStartSLOduration=6.284141051 podStartE2EDuration="6.284141051s" podCreationTimestamp="2026-04-16 19:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 19:10:05.282100076 +0000 UTC m=+2389.260183522" watchObservedRunningTime="2026-04-16 19:10:05.284141051 +0000 UTC m=+2389.262224496" Apr 16 19:10:05.293934 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:05.293910 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj"] Apr 16 19:10:05.296833 ip-10-0-132-14 
kubenswrapper[2569]: I0416 19:10:05.296814 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-7f8f9f49bf-jqkvj"] Apr 16 19:10:06.647115 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:06.647084 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="249f4566-3bc9-4706-b0bd-f49a4d341470" path="/var/lib/kubelet/pods/249f4566-3bc9-4706-b0bd-f49a4d341470/volumes" Apr 16 19:10:36.270329 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:36.270245 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" podUID="0573d8be-fea5-4a44-b830-4e7e9ba8c01c" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.30:8080/v2/models/isvc-predictive-xgboost-v2/ready\": dial tcp 10.132.0.30:8080: connect: connection refused" Apr 16 19:10:46.269725 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:46.269676 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" podUID="0573d8be-fea5-4a44-b830-4e7e9ba8c01c" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.30:8080/v2/models/isvc-predictive-xgboost-v2/ready\": dial tcp 10.132.0.30:8080: connect: connection refused" Apr 16 19:10:56.269518 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:10:56.269473 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" podUID="0573d8be-fea5-4a44-b830-4e7e9ba8c01c" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.30:8080/v2/models/isvc-predictive-xgboost-v2/ready\": dial tcp 10.132.0.30:8080: connect: connection refused" Apr 16 19:11:06.269712 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:06.269667 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" podUID="0573d8be-fea5-4a44-b830-4e7e9ba8c01c" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.30:8080/v2/models/isvc-predictive-xgboost-v2/ready\": dial tcp 10.132.0.30:8080: connect: connection refused" Apr 16 19:11:16.269045 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:16.268999 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" podUID="0573d8be-fea5-4a44-b830-4e7e9ba8c01c" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.30:8080/v2/models/isvc-predictive-xgboost-v2/ready\": dial tcp 10.132.0.30:8080: connect: connection refused" Apr 16 19:11:19.648399 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:19.648368 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" Apr 16 19:11:29.879951 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:29.879915 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2"] Apr 16 19:11:29.880446 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:29.880256 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" podUID="0573d8be-fea5-4a44-b830-4e7e9ba8c01c" containerName="kserve-container" 
containerID="cri-o://b1386e377a8816daec7d8053797b5b532f3dab0cbc4fae9d45472370bb0da9ce" gracePeriod=30 Apr 16 19:11:29.990530 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:29.990491 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr"] Apr 16 19:11:29.990801 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:29.990788 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="249f4566-3bc9-4706-b0bd-f49a4d341470" containerName="kserve-container" Apr 16 19:11:29.990844 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:29.990805 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="249f4566-3bc9-4706-b0bd-f49a4d341470" containerName="kserve-container" Apr 16 19:11:29.990844 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:29.990816 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="249f4566-3bc9-4706-b0bd-f49a4d341470" containerName="storage-initializer" Apr 16 19:11:29.990844 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:29.990822 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="249f4566-3bc9-4706-b0bd-f49a4d341470" containerName="storage-initializer" Apr 16 19:11:29.990935 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:29.990863 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="249f4566-3bc9-4706-b0bd-f49a4d341470" containerName="kserve-container" Apr 16 19:11:29.993533 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:29.993515 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" Apr 16 19:11:30.001810 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:30.001781 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr"] Apr 16 19:11:30.180910 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:30.180826 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/eefa1ecf-5916-4925-b7e9-abc4ec426a73-kserve-provision-location\") pod \"isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr\" (UID: \"eefa1ecf-5916-4925-b7e9-abc4ec426a73\") " pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" Apr 16 19:11:30.281286 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:30.281232 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/eefa1ecf-5916-4925-b7e9-abc4ec426a73-kserve-provision-location\") pod \"isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr\" (UID: \"eefa1ecf-5916-4925-b7e9-abc4ec426a73\") " pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" Apr 16 19:11:30.281657 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:30.281639 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/eefa1ecf-5916-4925-b7e9-abc4ec426a73-kserve-provision-location\") pod \"isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr\" (UID: \"eefa1ecf-5916-4925-b7e9-abc4ec426a73\") " pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" Apr 16 19:11:30.303729 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:30.303706 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" Apr 16 19:11:30.427972 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:30.427817 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr"] Apr 16 19:11:30.430121 ip-10-0-132-14 kubenswrapper[2569]: W0416 19:11:30.430098 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeefa1ecf_5916_4925_b7e9_abc4ec426a73.slice/crio-f48f9ae1affef73d47b20a209b16ce14897b35fe32ce903ba34615de26aa9f08 WatchSource:0}: Error finding container f48f9ae1affef73d47b20a209b16ce14897b35fe32ce903ba34615de26aa9f08: Status 404 returned error can't find the container with id f48f9ae1affef73d47b20a209b16ce14897b35fe32ce903ba34615de26aa9f08 Apr 16 19:11:30.499407 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:30.499372 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" event={"ID":"eefa1ecf-5916-4925-b7e9-abc4ec426a73","Type":"ContainerStarted","Data":"f81a895d259d04565020c37b08c0559f89a90d7139b764e6d7f0e5ec20e7b3b9"} Apr 16 19:11:30.499537 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:30.499416 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" event={"ID":"eefa1ecf-5916-4925-b7e9-abc4ec426a73","Type":"ContainerStarted","Data":"f48f9ae1affef73d47b20a209b16ce14897b35fe32ce903ba34615de26aa9f08"} Apr 16 19:11:34.510625 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:34.510461 2569 generic.go:358] "Generic (PLEG): container finished" podID="0573d8be-fea5-4a44-b830-4e7e9ba8c01c" containerID="b1386e377a8816daec7d8053797b5b532f3dab0cbc4fae9d45472370bb0da9ce" exitCode=0 Apr 16 19:11:34.510625 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:34.510538 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" event={"ID":"0573d8be-fea5-4a44-b830-4e7e9ba8c01c","Type":"ContainerDied","Data":"b1386e377a8816daec7d8053797b5b532f3dab0cbc4fae9d45472370bb0da9ce"} Apr 16 19:11:34.511832 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:34.511810 2569 generic.go:358] "Generic (PLEG): container finished" podID="eefa1ecf-5916-4925-b7e9-abc4ec426a73" containerID="f81a895d259d04565020c37b08c0559f89a90d7139b764e6d7f0e5ec20e7b3b9" exitCode=0 Apr 16 19:11:34.511958 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:34.511870 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" event={"ID":"eefa1ecf-5916-4925-b7e9-abc4ec426a73","Type":"ContainerDied","Data":"f81a895d259d04565020c37b08c0559f89a90d7139b764e6d7f0e5ec20e7b3b9"} Apr 16 19:11:34.526903 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:34.526879 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" Apr 16 19:11:34.714208 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:34.714173 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0573d8be-fea5-4a44-b830-4e7e9ba8c01c-kserve-provision-location\") pod \"0573d8be-fea5-4a44-b830-4e7e9ba8c01c\" (UID: \"0573d8be-fea5-4a44-b830-4e7e9ba8c01c\") " Apr 16 19:11:34.714537 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:34.714511 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0573d8be-fea5-4a44-b830-4e7e9ba8c01c-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "0573d8be-fea5-4a44-b830-4e7e9ba8c01c" (UID: "0573d8be-fea5-4a44-b830-4e7e9ba8c01c"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 19:11:34.815180 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:34.815091 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0573d8be-fea5-4a44-b830-4e7e9ba8c01c-kserve-provision-location\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 19:11:35.517024 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:35.516978 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" event={"ID":"0573d8be-fea5-4a44-b830-4e7e9ba8c01c","Type":"ContainerDied","Data":"f3f6d78c90965016df954a565a95933d53f6235e04efe8442e5bc89ef3d331c1"} Apr 16 19:11:35.517024 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:35.517008 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2" Apr 16 19:11:35.517566 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:35.517040 2569 scope.go:117] "RemoveContainer" containerID="b1386e377a8816daec7d8053797b5b532f3dab0cbc4fae9d45472370bb0da9ce" Apr 16 19:11:35.518616 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:35.518587 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" event={"ID":"eefa1ecf-5916-4925-b7e9-abc4ec426a73","Type":"ContainerStarted","Data":"d761f4a4d9ac66be278abc2c4e45b7e28b9f5a679dd9fc0aba5f4450feaf4230"} Apr 16 19:11:35.518838 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:35.518822 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" Apr 16 19:11:35.525116 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:35.524990 2569 scope.go:117] "RemoveContainer" containerID="f55c5a0b01de708e7c4a42768479b551c4356363f2c1deca3bd4e8d478fcce92" Apr 16 19:11:35.536004 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:35.535954 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" podStartSLOduration=6.535938759 podStartE2EDuration="6.535938759s" podCreationTimestamp="2026-04-16 19:11:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 19:11:35.534447855 +0000 UTC m=+2479.512531297" watchObservedRunningTime="2026-04-16 19:11:35.535938759 +0000 UTC m=+2479.514022205" Apr 16 19:11:35.546524 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:35.546492 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2"] Apr 16 19:11:35.552162 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:35.552137 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-795b445d66-8mmk2"] Apr 16 19:11:36.647561 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:11:36.647531 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0573d8be-fea5-4a44-b830-4e7e9ba8c01c" path="/var/lib/kubelet/pods/0573d8be-fea5-4a44-b830-4e7e9ba8c01c/volumes" Apr 16 19:12:06.523976 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:12:06.523893 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" podUID="eefa1ecf-5916-4925-b7e9-abc4ec426a73" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.31:8080/v2/models/isvc-predictive-lightgbm-v2/ready\": dial tcp 10.132.0.31:8080: connect: connection refused" Apr 16 19:12:16.522235 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:12:16.522183 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" podUID="eefa1ecf-5916-4925-b7e9-abc4ec426a73" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.31:8080/v2/models/isvc-predictive-lightgbm-v2/ready\": dial tcp 10.132.0.31:8080: connect: connection refused" Apr 16 19:12:26.522906 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:12:26.522859 2569 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" podUID="eefa1ecf-5916-4925-b7e9-abc4ec426a73" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.31:8080/v2/models/isvc-predictive-lightgbm-v2/ready\": dial tcp 10.132.0.31:8080: connect: connection refused" Apr 16 19:12:36.522604 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:12:36.522559 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" podUID="eefa1ecf-5916-4925-b7e9-abc4ec426a73" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.31:8080/v2/models/isvc-predictive-lightgbm-v2/ready\": dial tcp 10.132.0.31:8080: connect: connection refused" Apr 16 19:12:41.644612 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:12:41.644569 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" podUID="eefa1ecf-5916-4925-b7e9-abc4ec426a73" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.31:8080/v2/models/isvc-predictive-lightgbm-v2/ready\": dial tcp 10.132.0.31:8080: connect: connection refused" Apr 16 19:12:51.648844 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:12:51.648803 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" Apr 16 19:13:00.125782 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:00.125748 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr"] Apr 16 19:13:00.126274 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:00.126024 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" podUID="eefa1ecf-5916-4925-b7e9-abc4ec426a73" containerName="kserve-container" containerID="cri-o://d761f4a4d9ac66be278abc2c4e45b7e28b9f5a679dd9fc0aba5f4450feaf4230" gracePeriod=30 Apr 16 19:13:01.644812 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:01.644770 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" podUID="eefa1ecf-5916-4925-b7e9-abc4ec426a73" containerName="kserve-container" probeResult="failure" output="Get \"http://10.132.0.31:8080/v2/models/isvc-predictive-lightgbm-v2/ready\": dial tcp 10.132.0.31:8080: connect: connection refused" Apr 16 19:13:05.367942 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:05.367920 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" Apr 16 19:13:05.465533 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:05.465494 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/eefa1ecf-5916-4925-b7e9-abc4ec426a73-kserve-provision-location\") pod \"eefa1ecf-5916-4925-b7e9-abc4ec426a73\" (UID: \"eefa1ecf-5916-4925-b7e9-abc4ec426a73\") " Apr 16 19:13:05.465837 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:05.465813 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eefa1ecf-5916-4925-b7e9-abc4ec426a73-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "eefa1ecf-5916-4925-b7e9-abc4ec426a73" (UID: "eefa1ecf-5916-4925-b7e9-abc4ec426a73"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 19:13:05.566548 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:05.566458 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/eefa1ecf-5916-4925-b7e9-abc4ec426a73-kserve-provision-location\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 19:13:05.769834 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:05.769801 2569 generic.go:358] "Generic (PLEG): container finished" podID="eefa1ecf-5916-4925-b7e9-abc4ec426a73" containerID="d761f4a4d9ac66be278abc2c4e45b7e28b9f5a679dd9fc0aba5f4450feaf4230" exitCode=0 Apr 16 19:13:05.769998 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:05.769871 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" Apr 16 19:13:05.769998 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:05.769882 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" event={"ID":"eefa1ecf-5916-4925-b7e9-abc4ec426a73","Type":"ContainerDied","Data":"d761f4a4d9ac66be278abc2c4e45b7e28b9f5a679dd9fc0aba5f4450feaf4230"} Apr 16 19:13:05.769998 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:05.769931 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr" event={"ID":"eefa1ecf-5916-4925-b7e9-abc4ec426a73","Type":"ContainerDied","Data":"f48f9ae1affef73d47b20a209b16ce14897b35fe32ce903ba34615de26aa9f08"} Apr 16 19:13:05.769998 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:05.769950 2569 scope.go:117] "RemoveContainer" containerID="d761f4a4d9ac66be278abc2c4e45b7e28b9f5a679dd9fc0aba5f4450feaf4230" Apr 16 19:13:05.778253 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:05.778233 2569 scope.go:117] "RemoveContainer" containerID="f81a895d259d04565020c37b08c0559f89a90d7139b764e6d7f0e5ec20e7b3b9" Apr 16 19:13:05.785240 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:05.785223 2569 scope.go:117] "RemoveContainer" containerID="d761f4a4d9ac66be278abc2c4e45b7e28b9f5a679dd9fc0aba5f4450feaf4230" Apr 16 19:13:05.785506 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:13:05.785487 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d761f4a4d9ac66be278abc2c4e45b7e28b9f5a679dd9fc0aba5f4450feaf4230\": container with ID starting with d761f4a4d9ac66be278abc2c4e45b7e28b9f5a679dd9fc0aba5f4450feaf4230 not found: ID does 
not exist" containerID="d761f4a4d9ac66be278abc2c4e45b7e28b9f5a679dd9fc0aba5f4450feaf4230" Apr 16 19:13:05.785572 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:05.785516 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d761f4a4d9ac66be278abc2c4e45b7e28b9f5a679dd9fc0aba5f4450feaf4230"} err="failed to get container status \"d761f4a4d9ac66be278abc2c4e45b7e28b9f5a679dd9fc0aba5f4450feaf4230\": rpc error: code = NotFound desc = could not find container \"d761f4a4d9ac66be278abc2c4e45b7e28b9f5a679dd9fc0aba5f4450feaf4230\": container with ID starting with d761f4a4d9ac66be278abc2c4e45b7e28b9f5a679dd9fc0aba5f4450feaf4230 not found: ID does not exist" Apr 16 19:13:05.785572 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:05.785535 2569 scope.go:117] "RemoveContainer" containerID="f81a895d259d04565020c37b08c0559f89a90d7139b764e6d7f0e5ec20e7b3b9" Apr 16 19:13:05.785765 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:13:05.785748 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f81a895d259d04565020c37b08c0559f89a90d7139b764e6d7f0e5ec20e7b3b9\": container with ID starting with f81a895d259d04565020c37b08c0559f89a90d7139b764e6d7f0e5ec20e7b3b9 not found: ID does not exist" containerID="f81a895d259d04565020c37b08c0559f89a90d7139b764e6d7f0e5ec20e7b3b9" Apr 16 19:13:05.785805 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:05.785772 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f81a895d259d04565020c37b08c0559f89a90d7139b764e6d7f0e5ec20e7b3b9"} err="failed to get container status \"f81a895d259d04565020c37b08c0559f89a90d7139b764e6d7f0e5ec20e7b3b9\": rpc error: code = NotFound desc = could not find container \"f81a895d259d04565020c37b08c0559f89a90d7139b764e6d7f0e5ec20e7b3b9\": container with ID starting with f81a895d259d04565020c37b08c0559f89a90d7139b764e6d7f0e5ec20e7b3b9 not found: ID does not exist" Apr 16 19:13:05.789501 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:05.789479 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr"] Apr 16 19:13:05.792288 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:05.792266 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-665ddb84b7-fs2pr"] Apr 16 19:13:06.648684 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:13:06.648652 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eefa1ecf-5916-4925-b7e9-abc4ec426a73" path="/var/lib/kubelet/pods/eefa1ecf-5916-4925-b7e9-abc4ec426a73/volumes" Apr 16 19:19:20.593592 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.593563 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr"] Apr 16 19:19:20.594071 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.593858 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0573d8be-fea5-4a44-b830-4e7e9ba8c01c" containerName="storage-initializer" Apr 16 19:19:20.594071 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.593873 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="0573d8be-fea5-4a44-b830-4e7e9ba8c01c" containerName="storage-initializer" Apr 16 19:19:20.594071 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.593882 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="eefa1ecf-5916-4925-b7e9-abc4ec426a73" containerName="kserve-container" Apr 16 19:19:20.594071 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.593890 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="eefa1ecf-5916-4925-b7e9-abc4ec426a73" containerName="kserve-container" Apr 16 19:19:20.594071 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.593898 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eefa1ecf-5916-4925-b7e9-abc4ec426a73" containerName="storage-initializer" Apr 16 19:19:20.594071 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.593904 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="eefa1ecf-5916-4925-b7e9-abc4ec426a73" containerName="storage-initializer" Apr 16 19:19:20.594071 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.593913 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0573d8be-fea5-4a44-b830-4e7e9ba8c01c" containerName="kserve-container" Apr 16 19:19:20.594071 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.593918 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="0573d8be-fea5-4a44-b830-4e7e9ba8c01c" containerName="kserve-container" Apr 16 19:19:20.594071 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.593967 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="0573d8be-fea5-4a44-b830-4e7e9ba8c01c" containerName="kserve-container" Apr 16 19:19:20.594071 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.593975 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="eefa1ecf-5916-4925-b7e9-abc4ec426a73" containerName="kserve-container" Apr 16 19:19:20.596849 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.596828 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" Apr 16 19:19:20.598517 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.598496 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-z66mq\"" Apr 16 19:19:20.603155 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.603122 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr"] Apr 16 19:19:20.636869 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.636840 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/3e5ed365-b060-452e-9573-6abcefbf2135-kserve-provision-location\") pod \"isvc-tensorflow-predictor-864f6b7649-8csrr\" (UID: \"3e5ed365-b060-452e-9573-6abcefbf2135\") " pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" Apr 16 19:19:20.737949 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.737914 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/3e5ed365-b060-452e-9573-6abcefbf2135-kserve-provision-location\") pod \"isvc-tensorflow-predictor-864f6b7649-8csrr\" (UID: \"3e5ed365-b060-452e-9573-6abcefbf2135\") " pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" Apr 16 19:19:20.738289 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.738268 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/3e5ed365-b060-452e-9573-6abcefbf2135-kserve-provision-location\") pod 
\"isvc-tensorflow-predictor-864f6b7649-8csrr\" (UID: \"3e5ed365-b060-452e-9573-6abcefbf2135\") " pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" Apr 16 19:19:20.907590 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:20.907435 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" Apr 16 19:19:21.027639 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:21.027607 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr"] Apr 16 19:19:21.030148 ip-10-0-132-14 kubenswrapper[2569]: W0416 19:19:21.030122 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e5ed365_b060_452e_9573_6abcefbf2135.slice/crio-2452632042bb41fc1920f91c2a97e077db4f8e0d02c5b7295421986a43f95342 WatchSource:0}: Error finding container 2452632042bb41fc1920f91c2a97e077db4f8e0d02c5b7295421986a43f95342: Status 404 returned error can't find the container with id 2452632042bb41fc1920f91c2a97e077db4f8e0d02c5b7295421986a43f95342 Apr 16 19:19:21.031932 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:21.031916 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 19:19:21.812147 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:21.812108 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" event={"ID":"3e5ed365-b060-452e-9573-6abcefbf2135","Type":"ContainerStarted","Data":"f8d9c44eb071c548cf6074cf035c1739c4b23816bcf542f49586cd43ed2a4df6"} Apr 16 19:19:21.812147 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:21.812151 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" event={"ID":"3e5ed365-b060-452e-9573-6abcefbf2135","Type":"ContainerStarted","Data":"2452632042bb41fc1920f91c2a97e077db4f8e0d02c5b7295421986a43f95342"} Apr 16 19:19:25.824221 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:25.824130 2569 generic.go:358] "Generic (PLEG): container finished" podID="3e5ed365-b060-452e-9573-6abcefbf2135" containerID="f8d9c44eb071c548cf6074cf035c1739c4b23816bcf542f49586cd43ed2a4df6" exitCode=0 Apr 16 19:19:25.824712 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:25.824209 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" event={"ID":"3e5ed365-b060-452e-9573-6abcefbf2135","Type":"ContainerDied","Data":"f8d9c44eb071c548cf6074cf035c1739c4b23816bcf542f49586cd43ed2a4df6"} Apr 16 19:19:29.840824 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:29.840786 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" event={"ID":"3e5ed365-b060-452e-9573-6abcefbf2135","Type":"ContainerStarted","Data":"2f0c0dece11493fdce2f40423317340335989f5f3c91075572bd9e866e77b1b6"} Apr 16 19:19:29.841173 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:29.841079 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" Apr 16 19:19:29.842276 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:29.842249 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" podUID="3e5ed365-b060-452e-9573-6abcefbf2135" containerName="kserve-container" 
probeResult="failure" output="dial tcp 10.132.0.32:8080: connect: connection refused" Apr 16 19:19:29.869206 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:29.869152 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" podStartSLOduration=5.955240184 podStartE2EDuration="9.869138126s" podCreationTimestamp="2026-04-16 19:19:20 +0000 UTC" firstStartedPulling="2026-04-16 19:19:25.825499833 +0000 UTC m=+2949.803583257" lastFinishedPulling="2026-04-16 19:19:29.739397764 +0000 UTC m=+2953.717481199" observedRunningTime="2026-04-16 19:19:29.867652464 +0000 UTC m=+2953.845735919" watchObservedRunningTime="2026-04-16 19:19:29.869138126 +0000 UTC m=+2953.847221573" Apr 16 19:19:30.843944 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:30.843909 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" podUID="3e5ed365-b060-452e-9573-6abcefbf2135" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.32:8080: connect: connection refused" Apr 16 19:19:40.844983 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:40.844884 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" podUID="3e5ed365-b060-452e-9573-6abcefbf2135" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.32:8080: connect: connection refused" Apr 16 19:19:50.845199 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:19:50.845158 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" Apr 16 19:20:11.634972 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:11.634935 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr"] Apr 16 19:20:11.635581 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:11.635213 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" podUID="3e5ed365-b060-452e-9573-6abcefbf2135" containerName="kserve-container" containerID="cri-o://2f0c0dece11493fdce2f40423317340335989f5f3c91075572bd9e866e77b1b6" gracePeriod=30 Apr 16 19:20:11.680915 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:11.680881 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js"] Apr 16 19:20:11.683877 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:11.683861 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" Apr 16 19:20:11.692868 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:11.692840 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js"] Apr 16 19:20:11.805532 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:11.805487 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/29ede669-005b-4817-94f4-186169eb6499-kserve-provision-location\") pod \"isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js\" (UID: \"29ede669-005b-4817-94f4-186169eb6499\") " pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" Apr 16 19:20:11.906269 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:11.906161 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/29ede669-005b-4817-94f4-186169eb6499-kserve-provision-location\") pod \"isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js\" (UID: \"29ede669-005b-4817-94f4-186169eb6499\") " pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" Apr 16 19:20:11.906599 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:11.906575 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/29ede669-005b-4817-94f4-186169eb6499-kserve-provision-location\") pod \"isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js\" (UID: \"29ede669-005b-4817-94f4-186169eb6499\") " pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" Apr 16 19:20:11.993926 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:11.993875 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" Apr 16 19:20:12.107699 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:12.107600 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js"] Apr 16 19:20:12.110219 ip-10-0-132-14 kubenswrapper[2569]: W0416 19:20:12.110184 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29ede669_005b_4817_94f4_186169eb6499.slice/crio-e696f0b0947e1f53549d061ed63e6196b10d241006220c5cd9be409bba513859 WatchSource:0}: Error finding container e696f0b0947e1f53549d061ed63e6196b10d241006220c5cd9be409bba513859: Status 404 returned error can't find the container with id e696f0b0947e1f53549d061ed63e6196b10d241006220c5cd9be409bba513859 Apr 16 19:20:12.959797 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:12.959763 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" event={"ID":"29ede669-005b-4817-94f4-186169eb6499","Type":"ContainerStarted","Data":"5aff14a43145b4c8604ebd0d4b8c15db592de28c49a2a2583d47125017980add"} Apr 16 19:20:12.959797 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:12.959797 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" event={"ID":"29ede669-005b-4817-94f4-186169eb6499","Type":"ContainerStarted","Data":"e696f0b0947e1f53549d061ed63e6196b10d241006220c5cd9be409bba513859"} Apr 16 19:20:16.971419 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:16.971383 2569 generic.go:358] "Generic (PLEG): container finished" podID="29ede669-005b-4817-94f4-186169eb6499" containerID="5aff14a43145b4c8604ebd0d4b8c15db592de28c49a2a2583d47125017980add" exitCode=0 Apr 16 19:20:16.971885 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:16.971442 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" event={"ID":"29ede669-005b-4817-94f4-186169eb6499","Type":"ContainerDied","Data":"5aff14a43145b4c8604ebd0d4b8c15db592de28c49a2a2583d47125017980add"} Apr 16 19:20:17.975476 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:17.975441 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" event={"ID":"29ede669-005b-4817-94f4-186169eb6499","Type":"ContainerStarted","Data":"d8bf452dcd49ed7ad97dff688ca038e667ccb478285799e22216bd3deebaece2"} Apr 16 19:20:17.975855 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:17.975820 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" Apr 16 19:20:17.977108 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:17.977073 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" podUID="29ede669-005b-4817-94f4-186169eb6499" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.33:8080: connect: connection refused" Apr 16 19:20:17.989845 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:17.989800 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" podStartSLOduration=6.989784346 podStartE2EDuration="6.989784346s" podCreationTimestamp="2026-04-16 19:20:11 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 19:20:17.988743167 +0000 UTC m=+3001.966826611" watchObservedRunningTime="2026-04-16 19:20:17.989784346 +0000 UTC m=+3001.967867793" Apr 16 19:20:18.978193 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:18.978156 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" podUID="29ede669-005b-4817-94f4-186169eb6499" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.33:8080: connect: connection refused" Apr 16 19:20:28.979129 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:28.979094 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" Apr 16 19:20:42.046585 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:42.046548 2569 generic.go:358] "Generic (PLEG): container finished" podID="3e5ed365-b060-452e-9573-6abcefbf2135" containerID="2f0c0dece11493fdce2f40423317340335989f5f3c91075572bd9e866e77b1b6" exitCode=137 Apr 16 19:20:42.046956 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:42.046606 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" event={"ID":"3e5ed365-b060-452e-9573-6abcefbf2135","Type":"ContainerDied","Data":"2f0c0dece11493fdce2f40423317340335989f5f3c91075572bd9e866e77b1b6"} Apr 16 19:20:42.276291 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:42.276266 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" Apr 16 19:20:42.451259 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:42.451225 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/3e5ed365-b060-452e-9573-6abcefbf2135-kserve-provision-location\") pod \"3e5ed365-b060-452e-9573-6abcefbf2135\" (UID: \"3e5ed365-b060-452e-9573-6abcefbf2135\") " Apr 16 19:20:42.461569 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:42.461539 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e5ed365-b060-452e-9573-6abcefbf2135-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "3e5ed365-b060-452e-9573-6abcefbf2135" (UID: "3e5ed365-b060-452e-9573-6abcefbf2135"). InnerVolumeSpecName "kserve-provision-location". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 19:20:42.552766 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:42.552732 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/3e5ed365-b060-452e-9573-6abcefbf2135-kserve-provision-location\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 19:20:42.617814 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:42.617779 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js"] Apr 16 19:20:42.618160 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:42.618087 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" podUID="29ede669-005b-4817-94f4-186169eb6499" containerName="kserve-container" containerID="cri-o://d8bf452dcd49ed7ad97dff688ca038e667ccb478285799e22216bd3deebaece2" gracePeriod=30 Apr 16 19:20:43.050712 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:43.050677 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" event={"ID":"3e5ed365-b060-452e-9573-6abcefbf2135","Type":"ContainerDied","Data":"2452632042bb41fc1920f91c2a97e077db4f8e0d02c5b7295421986a43f95342"} Apr 16 19:20:43.050712 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:43.050699 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr" Apr 16 19:20:43.051161 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:43.050723 2569 scope.go:117] "RemoveContainer" containerID="2f0c0dece11493fdce2f40423317340335989f5f3c91075572bd9e866e77b1b6" Apr 16 19:20:43.058124 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:43.058101 2569 scope.go:117] "RemoveContainer" containerID="f8d9c44eb071c548cf6074cf035c1739c4b23816bcf542f49586cd43ed2a4df6" Apr 16 19:20:43.065610 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:43.065591 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr"] Apr 16 19:20:43.068821 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:43.068798 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-tensorflow-predictor-864f6b7649-8csrr"] Apr 16 19:20:44.647223 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:20:44.647187 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e5ed365-b060-452e-9573-6abcefbf2135" path="/var/lib/kubelet/pods/3e5ed365-b060-452e-9573-6abcefbf2135/volumes" Apr 16 19:21:13.135803 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:21:13.135656 2569 generic.go:358] "Generic (PLEG): container finished" podID="29ede669-005b-4817-94f4-186169eb6499" containerID="d8bf452dcd49ed7ad97dff688ca038e667ccb478285799e22216bd3deebaece2" exitCode=137 Apr 16 19:21:13.135803 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:21:13.135700 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" event={"ID":"29ede669-005b-4817-94f4-186169eb6499","Type":"ContainerDied","Data":"d8bf452dcd49ed7ad97dff688ca038e667ccb478285799e22216bd3deebaece2"} Apr 16 19:21:13.257818 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:21:13.257792 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" Apr 16 19:21:13.269114 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:21:13.269093 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/29ede669-005b-4817-94f4-186169eb6499-kserve-provision-location\") pod \"29ede669-005b-4817-94f4-186169eb6499\" (UID: \"29ede669-005b-4817-94f4-186169eb6499\") " Apr 16 19:21:13.278908 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:21:13.278872 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29ede669-005b-4817-94f4-186169eb6499-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "29ede669-005b-4817-94f4-186169eb6499" (UID: "29ede669-005b-4817-94f4-186169eb6499"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 19:21:13.370398 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:21:13.370331 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/29ede669-005b-4817-94f4-186169eb6499-kserve-provision-location\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 19:21:14.140356 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:21:14.140303 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" event={"ID":"29ede669-005b-4817-94f4-186169eb6499","Type":"ContainerDied","Data":"e696f0b0947e1f53549d061ed63e6196b10d241006220c5cd9be409bba513859"} Apr 16 19:21:14.140356 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:21:14.140353 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js" Apr 16 19:21:14.140808 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:21:14.140367 2569 scope.go:117] "RemoveContainer" containerID="d8bf452dcd49ed7ad97dff688ca038e667ccb478285799e22216bd3deebaece2" Apr 16 19:21:14.148627 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:21:14.148529 2569 scope.go:117] "RemoveContainer" containerID="5aff14a43145b4c8604ebd0d4b8c15db592de28c49a2a2583d47125017980add" Apr 16 19:21:14.159986 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:21:14.159963 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js"] Apr 16 19:21:14.166121 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:21:14.166102 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-7446cd7bb-sc7js"] Apr 16 19:21:14.647795 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:21:14.647754 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29ede669-005b-4817-94f4-186169eb6499" path="/var/lib/kubelet/pods/29ede669-005b-4817-94f4-186169eb6499/volumes" Apr 16 19:22:54.319908 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.319828 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc"] Apr 16 19:22:54.320385 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.320103 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29ede669-005b-4817-94f4-186169eb6499" containerName="kserve-container" Apr 16 19:22:54.320385 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.320114 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ede669-005b-4817-94f4-186169eb6499" containerName="kserve-container" Apr 16 19:22:54.320385 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.320126 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3e5ed365-b060-452e-9573-6abcefbf2135" containerName="storage-initializer" Apr 16 19:22:54.320385 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.320132 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e5ed365-b060-452e-9573-6abcefbf2135" containerName="storage-initializer" Apr 16 19:22:54.320385 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.320140 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29ede669-005b-4817-94f4-186169eb6499" containerName="storage-initializer" Apr 16 19:22:54.320385 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.320146 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ede669-005b-4817-94f4-186169eb6499" containerName="storage-initializer" Apr 16 19:22:54.320385 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.320153 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3e5ed365-b060-452e-9573-6abcefbf2135" containerName="kserve-container" Apr 16 19:22:54.320385 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.320159 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e5ed365-b060-452e-9573-6abcefbf2135" containerName="kserve-container" Apr 16 19:22:54.320385 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.320200 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="3e5ed365-b060-452e-9573-6abcefbf2135" containerName="kserve-container" Apr 16 19:22:54.320385 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.320207 2569 memory_manager.go:356] "RemoveStaleState removing 
state" podUID="29ede669-005b-4817-94f4-186169eb6499" containerName="kserve-container" Apr 16 19:22:54.322990 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.322974 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" Apr 16 19:22:54.324744 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.324725 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-z66mq\"" Apr 16 19:22:54.331762 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.331736 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc"] Apr 16 19:22:54.479280 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.479238 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/710462c7-c287-4fb6-86af-81b1120f16ab-kserve-provision-location\") pod \"isvc-xgboost-predictor-6bd4d9fcc8-gghpc\" (UID: \"710462c7-c287-4fb6-86af-81b1120f16ab\") " pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" Apr 16 19:22:54.580647 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.580569 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/710462c7-c287-4fb6-86af-81b1120f16ab-kserve-provision-location\") pod \"isvc-xgboost-predictor-6bd4d9fcc8-gghpc\" (UID: \"710462c7-c287-4fb6-86af-81b1120f16ab\") " pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" Apr 16 19:22:54.580953 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.580933 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/710462c7-c287-4fb6-86af-81b1120f16ab-kserve-provision-location\") pod \"isvc-xgboost-predictor-6bd4d9fcc8-gghpc\" (UID: \"710462c7-c287-4fb6-86af-81b1120f16ab\") " pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" Apr 16 19:22:54.632953 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.632927 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" Apr 16 19:22:54.747848 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:54.747814 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc"] Apr 16 19:22:54.750585 ip-10-0-132-14 kubenswrapper[2569]: W0416 19:22:54.750548 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod710462c7_c287_4fb6_86af_81b1120f16ab.slice/crio-e5cea1ad3012b0f7ced563b2addf43ec178abf0cf0f75769e24e72108dd9dc61 WatchSource:0}: Error finding container e5cea1ad3012b0f7ced563b2addf43ec178abf0cf0f75769e24e72108dd9dc61: Status 404 returned error can't find the container with id e5cea1ad3012b0f7ced563b2addf43ec178abf0cf0f75769e24e72108dd9dc61 Apr 16 19:22:55.416149 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:55.416113 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" event={"ID":"710462c7-c287-4fb6-86af-81b1120f16ab","Type":"ContainerStarted","Data":"97eae1e11baef4a8f8d1041fac50e0c1df32dab0f7973e3b2b175e54627d3298"} Apr 16 19:22:55.416149 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:55.416148 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" event={"ID":"710462c7-c287-4fb6-86af-81b1120f16ab","Type":"ContainerStarted","Data":"e5cea1ad3012b0f7ced563b2addf43ec178abf0cf0f75769e24e72108dd9dc61"} Apr 16 19:22:59.433586 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:59.433549 2569 generic.go:358] "Generic (PLEG): container finished" podID="710462c7-c287-4fb6-86af-81b1120f16ab" containerID="97eae1e11baef4a8f8d1041fac50e0c1df32dab0f7973e3b2b175e54627d3298" exitCode=0 Apr 16 19:22:59.433965 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:22:59.433622 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" event={"ID":"710462c7-c287-4fb6-86af-81b1120f16ab","Type":"ContainerDied","Data":"97eae1e11baef4a8f8d1041fac50e0c1df32dab0f7973e3b2b175e54627d3298"} Apr 16 19:23:19.494974 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:23:19.494935 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" event={"ID":"710462c7-c287-4fb6-86af-81b1120f16ab","Type":"ContainerStarted","Data":"9ab914f9941e358c9c9b42729dbd45e384c7cd0d9738f67fa2425b9aea737feb"} Apr 16 19:23:19.495500 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:23:19.495291 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" Apr 16 19:23:19.496550 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:23:19.496527 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" podUID="710462c7-c287-4fb6-86af-81b1120f16ab" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused" Apr 16 19:23:19.509932 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:23:19.509889 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" podStartSLOduration=5.6600730200000005 podStartE2EDuration="25.50987734s" podCreationTimestamp="2026-04-16 19:22:54 +0000 UTC" firstStartedPulling="2026-04-16 19:22:59.434751232 +0000 UTC m=+3163.412834656" 
lastFinishedPulling="2026-04-16 19:23:19.284555552 +0000 UTC m=+3183.262638976" observedRunningTime="2026-04-16 19:23:19.508260305 +0000 UTC m=+3183.486343750" watchObservedRunningTime="2026-04-16 19:23:19.50987734 +0000 UTC m=+3183.487960819" Apr 16 19:23:20.497684 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:23:20.497638 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" podUID="710462c7-c287-4fb6-86af-81b1120f16ab" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused" Apr 16 19:23:30.498306 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:23:30.498260 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" podUID="710462c7-c287-4fb6-86af-81b1120f16ab" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused" Apr 16 19:23:40.498371 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:23:40.498311 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" podUID="710462c7-c287-4fb6-86af-81b1120f16ab" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused" Apr 16 19:23:50.497879 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:23:50.497832 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" podUID="710462c7-c287-4fb6-86af-81b1120f16ab" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused" Apr 16 19:24:00.497928 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:00.497881 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" podUID="710462c7-c287-4fb6-86af-81b1120f16ab" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused" Apr 16 19:24:10.497910 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:10.497810 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" podUID="710462c7-c287-4fb6-86af-81b1120f16ab" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused" Apr 16 19:24:20.498639 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:20.498609 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" Apr 16 19:24:24.487156 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:24.487124 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc"] Apr 16 19:24:24.487539 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:24.487403 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" podUID="710462c7-c287-4fb6-86af-81b1120f16ab" containerName="kserve-container" containerID="cri-o://9ab914f9941e358c9c9b42729dbd45e384c7cd0d9738f67fa2425b9aea737feb" gracePeriod=30 Apr 16 19:24:28.037799 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:28.033590 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" Apr 16 19:24:28.096426 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:28.096387 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/710462c7-c287-4fb6-86af-81b1120f16ab-kserve-provision-location\") pod \"710462c7-c287-4fb6-86af-81b1120f16ab\" (UID: \"710462c7-c287-4fb6-86af-81b1120f16ab\") " Apr 16 19:24:28.096734 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:28.096708 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/710462c7-c287-4fb6-86af-81b1120f16ab-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "710462c7-c287-4fb6-86af-81b1120f16ab" (UID: "710462c7-c287-4fb6-86af-81b1120f16ab"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 19:24:28.197850 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:28.197811 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/710462c7-c287-4fb6-86af-81b1120f16ab-kserve-provision-location\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 19:24:28.680907 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:28.680876 2569 generic.go:358] "Generic (PLEG): container finished" podID="710462c7-c287-4fb6-86af-81b1120f16ab" containerID="9ab914f9941e358c9c9b42729dbd45e384c7cd0d9738f67fa2425b9aea737feb" exitCode=0 Apr 16 19:24:28.681069 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:28.680948 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" event={"ID":"710462c7-c287-4fb6-86af-81b1120f16ab","Type":"ContainerDied","Data":"9ab914f9941e358c9c9b42729dbd45e384c7cd0d9738f67fa2425b9aea737feb"} Apr 16 19:24:28.681069 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:28.680969 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" Apr 16 19:24:28.681069 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:28.680986 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc" event={"ID":"710462c7-c287-4fb6-86af-81b1120f16ab","Type":"ContainerDied","Data":"e5cea1ad3012b0f7ced563b2addf43ec178abf0cf0f75769e24e72108dd9dc61"} Apr 16 19:24:28.681069 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:28.681001 2569 scope.go:117] "RemoveContainer" containerID="9ab914f9941e358c9c9b42729dbd45e384c7cd0d9738f67fa2425b9aea737feb" Apr 16 19:24:28.688713 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:28.688694 2569 scope.go:117] "RemoveContainer" containerID="97eae1e11baef4a8f8d1041fac50e0c1df32dab0f7973e3b2b175e54627d3298" Apr 16 19:24:28.695166 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:28.695144 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc"] Apr 16 19:24:28.695762 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:28.695741 2569 scope.go:117] "RemoveContainer" containerID="9ab914f9941e358c9c9b42729dbd45e384c7cd0d9738f67fa2425b9aea737feb" Apr 16 19:24:28.696026 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:24:28.696008 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ab914f9941e358c9c9b42729dbd45e384c7cd0d9738f67fa2425b9aea737feb\": container with ID starting with 9ab914f9941e358c9c9b42729dbd45e384c7cd0d9738f67fa2425b9aea737feb not found: ID does not exist" containerID="9ab914f9941e358c9c9b42729dbd45e384c7cd0d9738f67fa2425b9aea737feb" Apr 16 19:24:28.696093 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:28.696033 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ab914f9941e358c9c9b42729dbd45e384c7cd0d9738f67fa2425b9aea737feb"} err="failed to get container status \"9ab914f9941e358c9c9b42729dbd45e384c7cd0d9738f67fa2425b9aea737feb\": rpc error: code = NotFound desc = could not find container \"9ab914f9941e358c9c9b42729dbd45e384c7cd0d9738f67fa2425b9aea737feb\": container with ID starting with 9ab914f9941e358c9c9b42729dbd45e384c7cd0d9738f67fa2425b9aea737feb not found: ID does not exist" Apr 16 19:24:28.696093 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:28.696053 2569 scope.go:117] "RemoveContainer" containerID="97eae1e11baef4a8f8d1041fac50e0c1df32dab0f7973e3b2b175e54627d3298" Apr 16 19:24:28.696281 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:24:28.696266 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97eae1e11baef4a8f8d1041fac50e0c1df32dab0f7973e3b2b175e54627d3298\": container with ID starting with 97eae1e11baef4a8f8d1041fac50e0c1df32dab0f7973e3b2b175e54627d3298 not found: ID does not exist" containerID="97eae1e11baef4a8f8d1041fac50e0c1df32dab0f7973e3b2b175e54627d3298" Apr 16 19:24:28.696328 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:28.696284 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97eae1e11baef4a8f8d1041fac50e0c1df32dab0f7973e3b2b175e54627d3298"} err="failed to get container status \"97eae1e11baef4a8f8d1041fac50e0c1df32dab0f7973e3b2b175e54627d3298\": rpc error: code = NotFound desc = could not find container \"97eae1e11baef4a8f8d1041fac50e0c1df32dab0f7973e3b2b175e54627d3298\": container with ID starting with 
97eae1e11baef4a8f8d1041fac50e0c1df32dab0f7973e3b2b175e54627d3298 not found: ID does not exist" Apr 16 19:24:28.699827 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:28.699807 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-predictor-6bd4d9fcc8-gghpc"] Apr 16 19:24:30.647785 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:24:30.647750 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="710462c7-c287-4fb6-86af-81b1120f16ab" path="/var/lib/kubelet/pods/710462c7-c287-4fb6-86af-81b1120f16ab/volumes" Apr 16 19:25:44.958738 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:44.958665 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp"] Apr 16 19:25:44.959149 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:44.958958 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="710462c7-c287-4fb6-86af-81b1120f16ab" containerName="kserve-container" Apr 16 19:25:44.959149 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:44.958971 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="710462c7-c287-4fb6-86af-81b1120f16ab" containerName="kserve-container" Apr 16 19:25:44.959149 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:44.958982 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="710462c7-c287-4fb6-86af-81b1120f16ab" containerName="storage-initializer" Apr 16 19:25:44.959149 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:44.958988 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="710462c7-c287-4fb6-86af-81b1120f16ab" containerName="storage-initializer" Apr 16 19:25:44.959149 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:44.959039 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="710462c7-c287-4fb6-86af-81b1120f16ab" containerName="kserve-container" Apr 16 19:25:44.961746 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:44.961723 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" Apr 16 19:25:44.963720 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:44.963702 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-z66mq\"" Apr 16 19:25:44.968490 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:44.968443 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp"] Apr 16 19:25:45.054070 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:45.054035 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bac45875-fbf8-46e1-a10b-c099a3d38a9f-kserve-provision-location\") pod \"isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp\" (UID: \"bac45875-fbf8-46e1-a10b-c099a3d38a9f\") " pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" Apr 16 19:25:45.155398 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:45.155360 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bac45875-fbf8-46e1-a10b-c099a3d38a9f-kserve-provision-location\") pod \"isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp\" (UID: \"bac45875-fbf8-46e1-a10b-c099a3d38a9f\") " pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" Apr 16 19:25:45.155734 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:45.155714 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bac45875-fbf8-46e1-a10b-c099a3d38a9f-kserve-provision-location\") pod \"isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp\" (UID: \"bac45875-fbf8-46e1-a10b-c099a3d38a9f\") " pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" Apr 16 19:25:45.271454 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:45.271364 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" Apr 16 19:25:45.384543 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:45.384516 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp"] Apr 16 19:25:45.386823 ip-10-0-132-14 kubenswrapper[2569]: W0416 19:25:45.386779 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbac45875_fbf8_46e1_a10b_c099a3d38a9f.slice/crio-56c54d28d387424dccd5ef961bc43ba50445db04b9c88527d4d508c38d35cc18 WatchSource:0}: Error finding container 56c54d28d387424dccd5ef961bc43ba50445db04b9c88527d4d508c38d35cc18: Status 404 returned error can't find the container with id 56c54d28d387424dccd5ef961bc43ba50445db04b9c88527d4d508c38d35cc18 Apr 16 19:25:45.388647 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:45.388631 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 19:25:45.887935 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:45.887901 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" event={"ID":"bac45875-fbf8-46e1-a10b-c099a3d38a9f","Type":"ContainerStarted","Data":"ae545d2334a605f081f99c4c97bdca9cd591907a8efa35fe8c5bcd7bf1e2c588"} Apr 16 19:25:45.887935 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:45.887936 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" event={"ID":"bac45875-fbf8-46e1-a10b-c099a3d38a9f","Type":"ContainerStarted","Data":"56c54d28d387424dccd5ef961bc43ba50445db04b9c88527d4d508c38d35cc18"} Apr 16 19:25:49.902061 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:49.902025 2569 generic.go:358] "Generic (PLEG): container finished" podID="bac45875-fbf8-46e1-a10b-c099a3d38a9f" containerID="ae545d2334a605f081f99c4c97bdca9cd591907a8efa35fe8c5bcd7bf1e2c588" exitCode=0 Apr 16 19:25:49.902578 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:49.902080 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" event={"ID":"bac45875-fbf8-46e1-a10b-c099a3d38a9f","Type":"ContainerDied","Data":"ae545d2334a605f081f99c4c97bdca9cd591907a8efa35fe8c5bcd7bf1e2c588"} Apr 16 19:25:50.906096 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:50.906057 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" event={"ID":"bac45875-fbf8-46e1-a10b-c099a3d38a9f","Type":"ContainerStarted","Data":"db5ef0fa6237a2cb222d2b04e35e6a13b4846ee120ff1da21a76f6a106edca7b"} Apr 16 19:25:50.906588 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:50.906328 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" Apr 16 19:25:50.907569 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:50.907543 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" podUID="bac45875-fbf8-46e1-a10b-c099a3d38a9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 16 19:25:50.920425 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:50.920383 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" podStartSLOduration=6.92036957 podStartE2EDuration="6.92036957s" podCreationTimestamp="2026-04-16 19:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 19:25:50.91913368 +0000 UTC m=+3334.897217138" watchObservedRunningTime="2026-04-16 19:25:50.92036957 +0000 UTC m=+3334.898453016" Apr 16 19:25:51.909529 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:25:51.909488 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" podUID="bac45875-fbf8-46e1-a10b-c099a3d38a9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 16 19:26:01.910330 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:01.910285 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" podUID="bac45875-fbf8-46e1-a10b-c099a3d38a9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 16 19:26:11.909928 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:11.909887 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" podUID="bac45875-fbf8-46e1-a10b-c099a3d38a9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 16 19:26:21.909822 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:21.909777 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" podUID="bac45875-fbf8-46e1-a10b-c099a3d38a9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 16 19:26:31.909553 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:31.909504 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" podUID="bac45875-fbf8-46e1-a10b-c099a3d38a9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 16 19:26:41.910115 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:41.910068 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" podUID="bac45875-fbf8-46e1-a10b-c099a3d38a9f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 16 19:26:51.911323 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:51.911284 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" Apr 16 19:26:55.088991 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:55.088952 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp"] Apr 16 19:26:55.089471 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:55.089216 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" podUID="bac45875-fbf8-46e1-a10b-c099a3d38a9f" containerName="kserve-container" containerID="cri-o://db5ef0fa6237a2cb222d2b04e35e6a13b4846ee120ff1da21a76f6a106edca7b" 
gracePeriod=30 Apr 16 19:26:58.623064 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:58.623040 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" Apr 16 19:26:58.795580 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:58.795490 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bac45875-fbf8-46e1-a10b-c099a3d38a9f-kserve-provision-location\") pod \"bac45875-fbf8-46e1-a10b-c099a3d38a9f\" (UID: \"bac45875-fbf8-46e1-a10b-c099a3d38a9f\") " Apr 16 19:26:58.795841 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:58.795820 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bac45875-fbf8-46e1-a10b-c099a3d38a9f-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "bac45875-fbf8-46e1-a10b-c099a3d38a9f" (UID: "bac45875-fbf8-46e1-a10b-c099a3d38a9f"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 19:26:58.896257 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:58.896219 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bac45875-fbf8-46e1-a10b-c099a3d38a9f-kserve-provision-location\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 19:26:59.095353 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:59.095248 2569 generic.go:358] "Generic (PLEG): container finished" podID="bac45875-fbf8-46e1-a10b-c099a3d38a9f" containerID="db5ef0fa6237a2cb222d2b04e35e6a13b4846ee120ff1da21a76f6a106edca7b" exitCode=0 Apr 16 19:26:59.095353 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:59.095297 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" event={"ID":"bac45875-fbf8-46e1-a10b-c099a3d38a9f","Type":"ContainerDied","Data":"db5ef0fa6237a2cb222d2b04e35e6a13b4846ee120ff1da21a76f6a106edca7b"} Apr 16 19:26:59.095353 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:59.095319 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" event={"ID":"bac45875-fbf8-46e1-a10b-c099a3d38a9f","Type":"ContainerDied","Data":"56c54d28d387424dccd5ef961bc43ba50445db04b9c88527d4d508c38d35cc18"} Apr 16 19:26:59.095619 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:59.095359 2569 scope.go:117] "RemoveContainer" containerID="db5ef0fa6237a2cb222d2b04e35e6a13b4846ee120ff1da21a76f6a106edca7b" Apr 16 19:26:59.095619 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:59.095323 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp" Apr 16 19:26:59.103740 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:59.103659 2569 scope.go:117] "RemoveContainer" containerID="ae545d2334a605f081f99c4c97bdca9cd591907a8efa35fe8c5bcd7bf1e2c588" Apr 16 19:26:59.110512 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:59.110495 2569 scope.go:117] "RemoveContainer" containerID="db5ef0fa6237a2cb222d2b04e35e6a13b4846ee120ff1da21a76f6a106edca7b" Apr 16 19:26:59.110761 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:26:59.110743 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db5ef0fa6237a2cb222d2b04e35e6a13b4846ee120ff1da21a76f6a106edca7b\": container with ID starting with db5ef0fa6237a2cb222d2b04e35e6a13b4846ee120ff1da21a76f6a106edca7b not found: ID does not exist" containerID="db5ef0fa6237a2cb222d2b04e35e6a13b4846ee120ff1da21a76f6a106edca7b" Apr 16 19:26:59.110800 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:59.110771 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5ef0fa6237a2cb222d2b04e35e6a13b4846ee120ff1da21a76f6a106edca7b"} err="failed to get container status \"db5ef0fa6237a2cb222d2b04e35e6a13b4846ee120ff1da21a76f6a106edca7b\": rpc error: code = NotFound desc = could not find container \"db5ef0fa6237a2cb222d2b04e35e6a13b4846ee120ff1da21a76f6a106edca7b\": container with ID starting with db5ef0fa6237a2cb222d2b04e35e6a13b4846ee120ff1da21a76f6a106edca7b not found: ID does not exist" Apr 16 19:26:59.110800 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:59.110788 2569 scope.go:117] "RemoveContainer" containerID="ae545d2334a605f081f99c4c97bdca9cd591907a8efa35fe8c5bcd7bf1e2c588" Apr 16 19:26:59.111037 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:26:59.111020 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae545d2334a605f081f99c4c97bdca9cd591907a8efa35fe8c5bcd7bf1e2c588\": container with ID starting with ae545d2334a605f081f99c4c97bdca9cd591907a8efa35fe8c5bcd7bf1e2c588 not found: ID does not exist" containerID="ae545d2334a605f081f99c4c97bdca9cd591907a8efa35fe8c5bcd7bf1e2c588" Apr 16 19:26:59.111075 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:59.111045 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae545d2334a605f081f99c4c97bdca9cd591907a8efa35fe8c5bcd7bf1e2c588"} err="failed to get container status \"ae545d2334a605f081f99c4c97bdca9cd591907a8efa35fe8c5bcd7bf1e2c588\": rpc error: code = NotFound desc = could not find container \"ae545d2334a605f081f99c4c97bdca9cd591907a8efa35fe8c5bcd7bf1e2c588\": container with ID starting with ae545d2334a605f081f99c4c97bdca9cd591907a8efa35fe8c5bcd7bf1e2c588 not found: ID does not exist" Apr 16 19:26:59.118492 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:59.116566 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp"] Apr 16 19:26:59.122530 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:26:59.122509 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-5947cc4c99-rvtdp"] Apr 16 19:27:00.647352 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:00.647307 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bac45875-fbf8-46e1-a10b-c099a3d38a9f" 
path="/var/lib/kubelet/pods/bac45875-fbf8-46e1-a10b-c099a3d38a9f/volumes" Apr 16 19:27:45.312583 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:45.312546 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw"] Apr 16 19:27:45.312982 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:45.312805 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bac45875-fbf8-46e1-a10b-c099a3d38a9f" containerName="kserve-container" Apr 16 19:27:45.312982 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:45.312815 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac45875-fbf8-46e1-a10b-c099a3d38a9f" containerName="kserve-container" Apr 16 19:27:45.312982 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:45.312827 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bac45875-fbf8-46e1-a10b-c099a3d38a9f" containerName="storage-initializer" Apr 16 19:27:45.312982 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:45.312833 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac45875-fbf8-46e1-a10b-c099a3d38a9f" containerName="storage-initializer" Apr 16 19:27:45.312982 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:45.312876 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="bac45875-fbf8-46e1-a10b-c099a3d38a9f" containerName="kserve-container" Apr 16 19:27:45.315663 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:45.315647 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" Apr 16 19:27:45.317451 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:45.317431 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-z66mq\"" Apr 16 19:27:45.325110 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:45.325087 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw"] Apr 16 19:27:45.328627 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:45.328609 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/aa880a9e-211b-49d5-80f4-2b9cf9fd8554-kserve-provision-location\") pod \"isvc-xgboost-v2-predictor-675d9b5ff-97phw\" (UID: \"aa880a9e-211b-49d5-80f4-2b9cf9fd8554\") " pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" Apr 16 19:27:45.429368 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:45.429322 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/aa880a9e-211b-49d5-80f4-2b9cf9fd8554-kserve-provision-location\") pod \"isvc-xgboost-v2-predictor-675d9b5ff-97phw\" (UID: \"aa880a9e-211b-49d5-80f4-2b9cf9fd8554\") " pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" Apr 16 19:27:45.429708 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:45.429687 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/aa880a9e-211b-49d5-80f4-2b9cf9fd8554-kserve-provision-location\") pod \"isvc-xgboost-v2-predictor-675d9b5ff-97phw\" (UID: \"aa880a9e-211b-49d5-80f4-2b9cf9fd8554\") " pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" Apr 16 19:27:45.626721 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:45.626633 2569 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" Apr 16 19:27:45.739054 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:45.739015 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw"] Apr 16 19:27:45.742710 ip-10-0-132-14 kubenswrapper[2569]: W0416 19:27:45.742679 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa880a9e_211b_49d5_80f4_2b9cf9fd8554.slice/crio-70917ea41932148c35be2bbd68f0b20331b66b71be8bc3280ba40b8aeef66afa WatchSource:0}: Error finding container 70917ea41932148c35be2bbd68f0b20331b66b71be8bc3280ba40b8aeef66afa: Status 404 returned error can't find the container with id 70917ea41932148c35be2bbd68f0b20331b66b71be8bc3280ba40b8aeef66afa Apr 16 19:27:46.226100 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:46.226061 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" event={"ID":"aa880a9e-211b-49d5-80f4-2b9cf9fd8554","Type":"ContainerStarted","Data":"f911a416092848f11a52f032f36f3ab5bbddc01f11fbd2776888ecaaf3de5636"} Apr 16 19:27:46.226279 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:46.226106 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" event={"ID":"aa880a9e-211b-49d5-80f4-2b9cf9fd8554","Type":"ContainerStarted","Data":"70917ea41932148c35be2bbd68f0b20331b66b71be8bc3280ba40b8aeef66afa"} Apr 16 19:27:50.237307 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:50.237270 2569 generic.go:358] "Generic (PLEG): container finished" podID="aa880a9e-211b-49d5-80f4-2b9cf9fd8554" containerID="f911a416092848f11a52f032f36f3ab5bbddc01f11fbd2776888ecaaf3de5636" exitCode=0 Apr 16 19:27:50.237689 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:50.237358 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" event={"ID":"aa880a9e-211b-49d5-80f4-2b9cf9fd8554","Type":"ContainerDied","Data":"f911a416092848f11a52f032f36f3ab5bbddc01f11fbd2776888ecaaf3de5636"} Apr 16 19:27:51.241516 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:51.241477 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" event={"ID":"aa880a9e-211b-49d5-80f4-2b9cf9fd8554","Type":"ContainerStarted","Data":"1982e364a1c1094964ce9fe9669aa269ad25085eb0643eff1011eb43b66cf195"} Apr 16 19:27:51.241955 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:51.241828 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" Apr 16 19:27:51.243105 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:51.243076 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" podUID="aa880a9e-211b-49d5-80f4-2b9cf9fd8554" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 16 19:27:51.255652 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:51.255508 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" podStartSLOduration=6.255493096 podStartE2EDuration="6.255493096s" podCreationTimestamp="2026-04-16 19:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 19:27:51.254697256 +0000 UTC m=+3455.232780736" watchObservedRunningTime="2026-04-16 19:27:51.255493096 +0000 UTC m=+3455.233576543" Apr 16 19:27:52.244392 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:27:52.244351 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" podUID="aa880a9e-211b-49d5-80f4-2b9cf9fd8554" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 16 19:28:02.245190 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:02.245144 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" podUID="aa880a9e-211b-49d5-80f4-2b9cf9fd8554" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 16 19:28:12.244954 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:12.244910 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" podUID="aa880a9e-211b-49d5-80f4-2b9cf9fd8554" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 16 19:28:22.244427 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:22.244372 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" podUID="aa880a9e-211b-49d5-80f4-2b9cf9fd8554" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 16 19:28:32.244562 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:32.244512 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" podUID="aa880a9e-211b-49d5-80f4-2b9cf9fd8554" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 16 19:28:42.244718 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:42.244635 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" podUID="aa880a9e-211b-49d5-80f4-2b9cf9fd8554" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 16 19:28:52.245051 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:52.245014 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" Apr 16 19:28:55.442063 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:55.442031 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw"] Apr 16 19:28:55.442509 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:55.442284 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" podUID="aa880a9e-211b-49d5-80f4-2b9cf9fd8554" containerName="kserve-container" containerID="cri-o://1982e364a1c1094964ce9fe9669aa269ad25085eb0643eff1011eb43b66cf195" gracePeriod=30 Apr 16 19:28:59.090376 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:59.090353 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" Apr 16 19:28:59.275243 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:59.275207 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/aa880a9e-211b-49d5-80f4-2b9cf9fd8554-kserve-provision-location\") pod \"aa880a9e-211b-49d5-80f4-2b9cf9fd8554\" (UID: \"aa880a9e-211b-49d5-80f4-2b9cf9fd8554\") " Apr 16 19:28:59.275589 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:59.275567 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa880a9e-211b-49d5-80f4-2b9cf9fd8554-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "aa880a9e-211b-49d5-80f4-2b9cf9fd8554" (UID: "aa880a9e-211b-49d5-80f4-2b9cf9fd8554"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 19:28:59.375722 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:59.375687 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/aa880a9e-211b-49d5-80f4-2b9cf9fd8554-kserve-provision-location\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\"" Apr 16 19:28:59.431248 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:59.431212 2569 generic.go:358] "Generic (PLEG): container finished" podID="aa880a9e-211b-49d5-80f4-2b9cf9fd8554" containerID="1982e364a1c1094964ce9fe9669aa269ad25085eb0643eff1011eb43b66cf195" exitCode=0 Apr 16 19:28:59.431452 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:59.431278 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" event={"ID":"aa880a9e-211b-49d5-80f4-2b9cf9fd8554","Type":"ContainerDied","Data":"1982e364a1c1094964ce9fe9669aa269ad25085eb0643eff1011eb43b66cf195"} Apr 16 19:28:59.431452 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:59.431286 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" Apr 16 19:28:59.431452 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:59.431306 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw" event={"ID":"aa880a9e-211b-49d5-80f4-2b9cf9fd8554","Type":"ContainerDied","Data":"70917ea41932148c35be2bbd68f0b20331b66b71be8bc3280ba40b8aeef66afa"} Apr 16 19:28:59.431452 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:59.431321 2569 scope.go:117] "RemoveContainer" containerID="1982e364a1c1094964ce9fe9669aa269ad25085eb0643eff1011eb43b66cf195" Apr 16 19:28:59.439173 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:59.439156 2569 scope.go:117] "RemoveContainer" containerID="f911a416092848f11a52f032f36f3ab5bbddc01f11fbd2776888ecaaf3de5636" Apr 16 19:28:59.445906 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:59.445886 2569 scope.go:117] "RemoveContainer" containerID="1982e364a1c1094964ce9fe9669aa269ad25085eb0643eff1011eb43b66cf195" Apr 16 19:28:59.446131 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:28:59.446113 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1982e364a1c1094964ce9fe9669aa269ad25085eb0643eff1011eb43b66cf195\": container with ID starting with 1982e364a1c1094964ce9fe9669aa269ad25085eb0643eff1011eb43b66cf195 not found: ID does not exist" containerID="1982e364a1c1094964ce9fe9669aa269ad25085eb0643eff1011eb43b66cf195" Apr 16 19:28:59.446176 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:59.446140 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1982e364a1c1094964ce9fe9669aa269ad25085eb0643eff1011eb43b66cf195"} err="failed to get container status \"1982e364a1c1094964ce9fe9669aa269ad25085eb0643eff1011eb43b66cf195\": rpc error: code = NotFound desc = could not find container \"1982e364a1c1094964ce9fe9669aa269ad25085eb0643eff1011eb43b66cf195\": container with ID starting with 1982e364a1c1094964ce9fe9669aa269ad25085eb0643eff1011eb43b66cf195 not found: ID does not exist" Apr 16 19:28:59.446176 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:59.446155 2569 scope.go:117] "RemoveContainer" containerID="f911a416092848f11a52f032f36f3ab5bbddc01f11fbd2776888ecaaf3de5636" Apr 16 19:28:59.446380 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:28:59.446362 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f911a416092848f11a52f032f36f3ab5bbddc01f11fbd2776888ecaaf3de5636\": container with ID starting with f911a416092848f11a52f032f36f3ab5bbddc01f11fbd2776888ecaaf3de5636 not found: ID does not exist" containerID="f911a416092848f11a52f032f36f3ab5bbddc01f11fbd2776888ecaaf3de5636" Apr 16 19:28:59.446423 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:59.446384 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f911a416092848f11a52f032f36f3ab5bbddc01f11fbd2776888ecaaf3de5636"} err="failed to get container status \"f911a416092848f11a52f032f36f3ab5bbddc01f11fbd2776888ecaaf3de5636\": rpc error: code = NotFound desc = could not find container \"f911a416092848f11a52f032f36f3ab5bbddc01f11fbd2776888ecaaf3de5636\": container with ID starting with f911a416092848f11a52f032f36f3ab5bbddc01f11fbd2776888ecaaf3de5636 not found: ID does not exist" Apr 16 19:28:59.449790 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:59.449766 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw"] Apr 16 19:28:59.454194 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:28:59.454160 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-v2-predictor-675d9b5ff-97phw"] Apr 16 19:29:00.647785 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:29:00.647752 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa880a9e-211b-49d5-80f4-2b9cf9fd8554" path="/var/lib/kubelet/pods/aa880a9e-211b-49d5-80f4-2b9cf9fd8554/volumes" Apr 16 19:34:53.368299 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.368265 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-h97z4/must-gather-z4hz8"] Apr 16 19:34:53.368715 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.368530 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aa880a9e-211b-49d5-80f4-2b9cf9fd8554" containerName="kserve-container" Apr 16 19:34:53.368715 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.368542 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa880a9e-211b-49d5-80f4-2b9cf9fd8554" containerName="kserve-container" Apr 16 19:34:53.368715 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.368556 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aa880a9e-211b-49d5-80f4-2b9cf9fd8554" containerName="storage-initializer" Apr 16 19:34:53.368715 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.368563 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa880a9e-211b-49d5-80f4-2b9cf9fd8554" containerName="storage-initializer" Apr 16 19:34:53.368715 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.368602 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="aa880a9e-211b-49d5-80f4-2b9cf9fd8554" containerName="kserve-container" Apr 16 19:34:53.371413 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.371394 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-h97z4/must-gather-z4hz8"
Apr 16 19:34:53.373277 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.373253 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-h97z4\"/\"default-dockercfg-tv8bn\""
Apr 16 19:34:53.373415 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.373315 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-h97z4\"/\"kube-root-ca.crt\""
Apr 16 19:34:53.373415 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.373395 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-h97z4\"/\"openshift-service-ca.crt\""
Apr 16 19:34:53.379053 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.379022 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-h97z4/must-gather-z4hz8"]
Apr 16 19:34:53.451052 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.451017 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndc66\" (UniqueName: \"kubernetes.io/projected/113f656e-3dd7-43e0-8d9c-ade3142ccc1d-kube-api-access-ndc66\") pod \"must-gather-z4hz8\" (UID: \"113f656e-3dd7-43e0-8d9c-ade3142ccc1d\") " pod="openshift-must-gather-h97z4/must-gather-z4hz8"
Apr 16 19:34:53.451233 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.451081 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/113f656e-3dd7-43e0-8d9c-ade3142ccc1d-must-gather-output\") pod \"must-gather-z4hz8\" (UID: \"113f656e-3dd7-43e0-8d9c-ade3142ccc1d\") " pod="openshift-must-gather-h97z4/must-gather-z4hz8"
Apr 16 19:34:53.552261 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.552210 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ndc66\" (UniqueName: \"kubernetes.io/projected/113f656e-3dd7-43e0-8d9c-ade3142ccc1d-kube-api-access-ndc66\") pod \"must-gather-z4hz8\" (UID: \"113f656e-3dd7-43e0-8d9c-ade3142ccc1d\") " pod="openshift-must-gather-h97z4/must-gather-z4hz8"
Apr 16 19:34:53.552459 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.552282 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/113f656e-3dd7-43e0-8d9c-ade3142ccc1d-must-gather-output\") pod \"must-gather-z4hz8\" (UID: \"113f656e-3dd7-43e0-8d9c-ade3142ccc1d\") " pod="openshift-must-gather-h97z4/must-gather-z4hz8"
Apr 16 19:34:53.552591 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.552577 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/113f656e-3dd7-43e0-8d9c-ade3142ccc1d-must-gather-output\") pod \"must-gather-z4hz8\" (UID: \"113f656e-3dd7-43e0-8d9c-ade3142ccc1d\") " pod="openshift-must-gather-h97z4/must-gather-z4hz8"
Apr 16 19:34:53.559256 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.559235 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndc66\" (UniqueName: \"kubernetes.io/projected/113f656e-3dd7-43e0-8d9c-ade3142ccc1d-kube-api-access-ndc66\") pod \"must-gather-z4hz8\" (UID: \"113f656e-3dd7-43e0-8d9c-ade3142ccc1d\") " pod="openshift-must-gather-h97z4/must-gather-z4hz8"
Apr 16 19:34:53.680792 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.680712 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h97z4/must-gather-z4hz8"
Apr 16 19:34:53.794827 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.794799 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-h97z4/must-gather-z4hz8"]
Apr 16 19:34:53.797838 ip-10-0-132-14 kubenswrapper[2569]: W0416 19:34:53.797809 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod113f656e_3dd7_43e0_8d9c_ade3142ccc1d.slice/crio-a670ed31988302ed9e705c9f206ed91829aec6cf957afb80757ba4a439a3f052 WatchSource:0}: Error finding container a670ed31988302ed9e705c9f206ed91829aec6cf957afb80757ba4a439a3f052: Status 404 returned error can't find the container with id a670ed31988302ed9e705c9f206ed91829aec6cf957afb80757ba4a439a3f052
Apr 16 19:34:53.799790 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:53.799771 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 16 19:34:54.372845 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:54.372809 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h97z4/must-gather-z4hz8" event={"ID":"113f656e-3dd7-43e0-8d9c-ade3142ccc1d","Type":"ContainerStarted","Data":"a670ed31988302ed9e705c9f206ed91829aec6cf957afb80757ba4a439a3f052"}
Apr 16 19:34:59.393285 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:59.393244 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h97z4/must-gather-z4hz8" event={"ID":"113f656e-3dd7-43e0-8d9c-ade3142ccc1d","Type":"ContainerStarted","Data":"231c320e82416dac2cadb97f2f0d722a5cccef7eb5594d32f99e8e1779041d0c"}
Apr 16 19:34:59.393285 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:59.393281 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h97z4/must-gather-z4hz8" event={"ID":"113f656e-3dd7-43e0-8d9c-ade3142ccc1d","Type":"ContainerStarted","Data":"82b0475d4c2b2f3bc42e2561252455e85208eaa0f8f1cb8a073eb3748624067e"}
Apr 16 19:34:59.407177 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:34:59.407124 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-h97z4/must-gather-z4hz8" podStartSLOduration=1.837962806 podStartE2EDuration="6.407106178s" podCreationTimestamp="2026-04-16 19:34:53 +0000 UTC" firstStartedPulling="2026-04-16 19:34:53.799903301 +0000 UTC m=+3877.777986725" lastFinishedPulling="2026-04-16 19:34:58.369046662 +0000 UTC m=+3882.347130097" observedRunningTime="2026-04-16 19:34:59.406726692 +0000 UTC m=+3883.384810141" watchObservedRunningTime="2026-04-16 19:34:59.407106178 +0000 UTC m=+3883.385189627"
Apr 16 19:35:18.455277 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:18.455243 2569 generic.go:358] "Generic (PLEG): container finished" podID="113f656e-3dd7-43e0-8d9c-ade3142ccc1d" containerID="82b0475d4c2b2f3bc42e2561252455e85208eaa0f8f1cb8a073eb3748624067e" exitCode=0
Apr 16 19:35:18.455697 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:18.455281 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h97z4/must-gather-z4hz8" event={"ID":"113f656e-3dd7-43e0-8d9c-ade3142ccc1d","Type":"ContainerDied","Data":"82b0475d4c2b2f3bc42e2561252455e85208eaa0f8f1cb8a073eb3748624067e"}
Apr 16 19:35:18.455697 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:18.455606 2569 scope.go:117] "RemoveContainer" containerID="82b0475d4c2b2f3bc42e2561252455e85208eaa0f8f1cb8a073eb3748624067e"
Apr 16 19:35:19.025809 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:19.025775 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-h97z4_must-gather-z4hz8_113f656e-3dd7-43e0-8d9c-ade3142ccc1d/gather/0.log"
Apr 16 19:35:22.190885 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:22.190848 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-66tjb_fd1a63ff-830c-4979-9f9d-bd6268584fbf/global-pull-secret-syncer/0.log"
Apr 16 19:35:22.442824 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:22.442749 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-nx8s6_a72d0e51-8bc7-48d1-b552-f8f4b4a532f9/konnectivity-agent/0.log"
Apr 16 19:35:22.464854 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:22.464831 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-132-14.ec2.internal_5c9e91c38fa43bfd6e69aef3cdcafb41/haproxy/0.log"
Apr 16 19:35:24.494790 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:24.494753 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-h97z4/must-gather-z4hz8"]
Apr 16 19:35:24.495223 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:24.494966 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-must-gather-h97z4/must-gather-z4hz8" podUID="113f656e-3dd7-43e0-8d9c-ade3142ccc1d" containerName="copy" containerID="cri-o://231c320e82416dac2cadb97f2f0d722a5cccef7eb5594d32f99e8e1779041d0c" gracePeriod=2
Apr 16 19:35:24.498374 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:24.498331 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-h97z4/must-gather-z4hz8"]
Apr 16 19:35:24.716726 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:24.716705 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-h97z4_must-gather-z4hz8_113f656e-3dd7-43e0-8d9c-ade3142ccc1d/copy/0.log"
Apr 16 19:35:24.717071 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:24.717055 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h97z4/must-gather-z4hz8"
Apr 16 19:35:24.806812 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:24.806735 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndc66\" (UniqueName: \"kubernetes.io/projected/113f656e-3dd7-43e0-8d9c-ade3142ccc1d-kube-api-access-ndc66\") pod \"113f656e-3dd7-43e0-8d9c-ade3142ccc1d\" (UID: \"113f656e-3dd7-43e0-8d9c-ade3142ccc1d\") "
Apr 16 19:35:24.806812 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:24.806776 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/113f656e-3dd7-43e0-8d9c-ade3142ccc1d-must-gather-output\") pod \"113f656e-3dd7-43e0-8d9c-ade3142ccc1d\" (UID: \"113f656e-3dd7-43e0-8d9c-ade3142ccc1d\") "
Apr 16 19:35:24.808110 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:24.808076 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/113f656e-3dd7-43e0-8d9c-ade3142ccc1d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "113f656e-3dd7-43e0-8d9c-ade3142ccc1d" (UID: "113f656e-3dd7-43e0-8d9c-ade3142ccc1d"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 16 19:35:24.808943 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:24.808916 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/113f656e-3dd7-43e0-8d9c-ade3142ccc1d-kube-api-access-ndc66" (OuterVolumeSpecName: "kube-api-access-ndc66") pod "113f656e-3dd7-43e0-8d9c-ade3142ccc1d" (UID: "113f656e-3dd7-43e0-8d9c-ade3142ccc1d"). InnerVolumeSpecName "kube-api-access-ndc66". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 19:35:24.908130 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:24.908093 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ndc66\" (UniqueName: \"kubernetes.io/projected/113f656e-3dd7-43e0-8d9c-ade3142ccc1d-kube-api-access-ndc66\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\""
Apr 16 19:35:24.908130 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:24.908123 2569 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/113f656e-3dd7-43e0-8d9c-ade3142ccc1d-must-gather-output\") on node \"ip-10-0-132-14.ec2.internal\" DevicePath \"\""
Apr 16 19:35:25.475634 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:25.475607 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-h97z4_must-gather-z4hz8_113f656e-3dd7-43e0-8d9c-ade3142ccc1d/copy/0.log"
Apr 16 19:35:25.475926 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:25.475898 2569 generic.go:358] "Generic (PLEG): container finished" podID="113f656e-3dd7-43e0-8d9c-ade3142ccc1d" containerID="231c320e82416dac2cadb97f2f0d722a5cccef7eb5594d32f99e8e1779041d0c" exitCode=143
Apr 16 19:35:25.475999 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:25.475953 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h97z4/must-gather-z4hz8"
Apr 16 19:35:25.475999 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:25.475971 2569 scope.go:117] "RemoveContainer" containerID="231c320e82416dac2cadb97f2f0d722a5cccef7eb5594d32f99e8e1779041d0c"
Apr 16 19:35:25.483101 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:25.483076 2569 scope.go:117] "RemoveContainer" containerID="82b0475d4c2b2f3bc42e2561252455e85208eaa0f8f1cb8a073eb3748624067e"
Apr 16 19:35:25.495004 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:25.494985 2569 scope.go:117] "RemoveContainer" containerID="231c320e82416dac2cadb97f2f0d722a5cccef7eb5594d32f99e8e1779041d0c"
Apr 16 19:35:25.495280 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:35:25.495217 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"231c320e82416dac2cadb97f2f0d722a5cccef7eb5594d32f99e8e1779041d0c\": container with ID starting with 231c320e82416dac2cadb97f2f0d722a5cccef7eb5594d32f99e8e1779041d0c not found: ID does not exist" containerID="231c320e82416dac2cadb97f2f0d722a5cccef7eb5594d32f99e8e1779041d0c"
Apr 16 19:35:25.495280 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:25.495244 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"231c320e82416dac2cadb97f2f0d722a5cccef7eb5594d32f99e8e1779041d0c"} err="failed to get container status \"231c320e82416dac2cadb97f2f0d722a5cccef7eb5594d32f99e8e1779041d0c\": rpc error: code = NotFound desc = could not find container \"231c320e82416dac2cadb97f2f0d722a5cccef7eb5594d32f99e8e1779041d0c\": container with ID starting with 231c320e82416dac2cadb97f2f0d722a5cccef7eb5594d32f99e8e1779041d0c not found: ID does not exist"
Apr 16 19:35:25.495280 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:25.495262 2569 scope.go:117] "RemoveContainer" containerID="82b0475d4c2b2f3bc42e2561252455e85208eaa0f8f1cb8a073eb3748624067e"
Apr 16 19:35:25.495517 ip-10-0-132-14 kubenswrapper[2569]: E0416 19:35:25.495496 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82b0475d4c2b2f3bc42e2561252455e85208eaa0f8f1cb8a073eb3748624067e\": container with ID starting with 82b0475d4c2b2f3bc42e2561252455e85208eaa0f8f1cb8a073eb3748624067e not found: ID does not exist" containerID="82b0475d4c2b2f3bc42e2561252455e85208eaa0f8f1cb8a073eb3748624067e"
Apr 16 19:35:25.495571 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:25.495526 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82b0475d4c2b2f3bc42e2561252455e85208eaa0f8f1cb8a073eb3748624067e"} err="failed to get container status \"82b0475d4c2b2f3bc42e2561252455e85208eaa0f8f1cb8a073eb3748624067e\": rpc error: code = NotFound desc = could not find container \"82b0475d4c2b2f3bc42e2561252455e85208eaa0f8f1cb8a073eb3748624067e\": container with ID starting with 82b0475d4c2b2f3bc42e2561252455e85208eaa0f8f1cb8a073eb3748624067e not found: ID does not exist"
Apr 16 19:35:26.569945 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:26.569914 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-psccd_8fd22549-8a71-4c5b-89f5-241942077e63/node-exporter/0.log"
Apr 16 19:35:26.593794 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:26.593765 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-psccd_8fd22549-8a71-4c5b-89f5-241942077e63/kube-rbac-proxy/0.log"
Apr 16 19:35:26.615612 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:26.615585 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-psccd_8fd22549-8a71-4c5b-89f5-241942077e63/init-textfile/0.log"
Apr 16 19:35:26.647683 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:26.647657 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="113f656e-3dd7-43e0-8d9c-ade3142ccc1d" path="/var/lib/kubelet/pods/113f656e-3dd7-43e0-8d9c-ade3142ccc1d/volumes"
Apr 16 19:35:28.320478 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:28.320447 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-console_networking-console-plugin-5cb6cf4cb4-p8jnc_9d04db45-c40a-4deb-a86e-03e77a3b560e/networking-console-plugin/0.log"
Apr 16 19:35:29.475238 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.475205 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"]
Apr 16 19:35:29.475664 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.475469 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="113f656e-3dd7-43e0-8d9c-ade3142ccc1d" containerName="copy"
Apr 16 19:35:29.475664 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.475480 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="113f656e-3dd7-43e0-8d9c-ade3142ccc1d" containerName="copy"
Apr 16 19:35:29.475664 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.475497 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="113f656e-3dd7-43e0-8d9c-ade3142ccc1d" containerName="gather"
Apr 16 19:35:29.475664 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.475502 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="113f656e-3dd7-43e0-8d9c-ade3142ccc1d" containerName="gather"
Apr 16 19:35:29.475664 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.475544 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="113f656e-3dd7-43e0-8d9c-ade3142ccc1d" containerName="copy"
Apr 16 19:35:29.475664 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.475552 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="113f656e-3dd7-43e0-8d9c-ade3142ccc1d" containerName="gather"
Apr 16 19:35:29.480267 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.480247 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.482555 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.482536 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-dncfj\"/\"kube-root-ca.crt\""
Apr 16 19:35:29.482980 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.482963 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-dncfj\"/\"openshift-service-ca.crt\""
Apr 16 19:35:29.483051 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.482995 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-dncfj\"/\"default-dockercfg-4wqkq\""
Apr 16 19:35:29.489176 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.489150 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"]
Apr 16 19:35:29.542179 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.542142 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/2dfc8046-0969-4ac0-8f5e-80afef124541-podres\") pod \"perf-node-gather-daemonset-5s7ch\" (UID: \"2dfc8046-0969-4ac0-8f5e-80afef124541\") " pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.542179 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.542182 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/2dfc8046-0969-4ac0-8f5e-80afef124541-proc\") pod \"perf-node-gather-daemonset-5s7ch\" (UID: \"2dfc8046-0969-4ac0-8f5e-80afef124541\") " pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.542461 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.542212 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2dfc8046-0969-4ac0-8f5e-80afef124541-sys\") pod \"perf-node-gather-daemonset-5s7ch\" (UID: \"2dfc8046-0969-4ac0-8f5e-80afef124541\") " pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.542461 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.542244 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dfc8046-0969-4ac0-8f5e-80afef124541-lib-modules\") pod \"perf-node-gather-daemonset-5s7ch\" (UID: \"2dfc8046-0969-4ac0-8f5e-80afef124541\") " pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.542461 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.542262 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvbzd\" (UniqueName: \"kubernetes.io/projected/2dfc8046-0969-4ac0-8f5e-80afef124541-kube-api-access-mvbzd\") pod \"perf-node-gather-daemonset-5s7ch\" (UID: \"2dfc8046-0969-4ac0-8f5e-80afef124541\") " pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.642815 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.642774 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/2dfc8046-0969-4ac0-8f5e-80afef124541-podres\") pod \"perf-node-gather-daemonset-5s7ch\" (UID: \"2dfc8046-0969-4ac0-8f5e-80afef124541\") " pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.642815 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.642813 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/2dfc8046-0969-4ac0-8f5e-80afef124541-proc\") pod \"perf-node-gather-daemonset-5s7ch\" (UID: \"2dfc8046-0969-4ac0-8f5e-80afef124541\") " pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.643035 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.642835 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2dfc8046-0969-4ac0-8f5e-80afef124541-sys\") pod \"perf-node-gather-daemonset-5s7ch\" (UID: \"2dfc8046-0969-4ac0-8f5e-80afef124541\") " pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.643035 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.642856 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dfc8046-0969-4ac0-8f5e-80afef124541-lib-modules\") pod \"perf-node-gather-daemonset-5s7ch\" (UID: \"2dfc8046-0969-4ac0-8f5e-80afef124541\") " pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.643035 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.642875 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mvbzd\" (UniqueName: \"kubernetes.io/projected/2dfc8046-0969-4ac0-8f5e-80afef124541-kube-api-access-mvbzd\") pod \"perf-node-gather-daemonset-5s7ch\" (UID: \"2dfc8046-0969-4ac0-8f5e-80afef124541\") " pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.643035 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.642904 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/2dfc8046-0969-4ac0-8f5e-80afef124541-proc\") pod \"perf-node-gather-daemonset-5s7ch\" (UID: \"2dfc8046-0969-4ac0-8f5e-80afef124541\") " pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.643035 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.642958 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/2dfc8046-0969-4ac0-8f5e-80afef124541-podres\") pod \"perf-node-gather-daemonset-5s7ch\" (UID: \"2dfc8046-0969-4ac0-8f5e-80afef124541\") " pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.643035 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.642961 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2dfc8046-0969-4ac0-8f5e-80afef124541-sys\") pod \"perf-node-gather-daemonset-5s7ch\" (UID: \"2dfc8046-0969-4ac0-8f5e-80afef124541\") " pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.643035 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.643006 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dfc8046-0969-4ac0-8f5e-80afef124541-lib-modules\") pod \"perf-node-gather-daemonset-5s7ch\" (UID: \"2dfc8046-0969-4ac0-8f5e-80afef124541\") " pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.654642 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.654619 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvbzd\" (UniqueName: \"kubernetes.io/projected/2dfc8046-0969-4ac0-8f5e-80afef124541-kube-api-access-mvbzd\") pod \"perf-node-gather-daemonset-5s7ch\" (UID: \"2dfc8046-0969-4ac0-8f5e-80afef124541\") " pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.789920 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.789834 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:29.905880 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:29.905856 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"]
Apr 16 19:35:29.908862 ip-10-0-132-14 kubenswrapper[2569]: W0416 19:35:29.908826 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2dfc8046_0969_4ac0_8f5e_80afef124541.slice/crio-a0817412bccfcbff1d12ed276afc0f9b28bedc1eaf6c14642c2ad25b951d9ef1 WatchSource:0}: Error finding container a0817412bccfcbff1d12ed276afc0f9b28bedc1eaf6c14642c2ad25b951d9ef1: Status 404 returned error can't find the container with id a0817412bccfcbff1d12ed276afc0f9b28bedc1eaf6c14642c2ad25b951d9ef1
Apr 16 19:35:30.437096 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:30.437046 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-wq9r5_4561dc6f-93f8-48ae-a46a-8ae75f78fdb1/dns/0.log"
Apr 16 19:35:30.464407 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:30.464368 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-wq9r5_4561dc6f-93f8-48ae-a46a-8ae75f78fdb1/kube-rbac-proxy/0.log"
Apr 16 19:35:30.491617 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:30.491574 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch" event={"ID":"2dfc8046-0969-4ac0-8f5e-80afef124541","Type":"ContainerStarted","Data":"95bf8d337537651e26de5318dd3517d0652404678689c0e9ac278e5f98fb5641"}
Apr 16 19:35:30.491617 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:30.491615 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch" event={"ID":"2dfc8046-0969-4ac0-8f5e-80afef124541","Type":"ContainerStarted","Data":"a0817412bccfcbff1d12ed276afc0f9b28bedc1eaf6c14642c2ad25b951d9ef1"}
Apr 16 19:35:30.492035 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:30.491617 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-8h69z_8b1f3fed-8fbc-4087-a06e-b4bb1396ba36/dns-node-resolver/0.log"
Apr 16 19:35:30.492035 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:30.491707 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:30.513417 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:30.513377 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch" podStartSLOduration=1.5133651129999999 podStartE2EDuration="1.513365113s" podCreationTimestamp="2026-04-16 19:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 19:35:30.511207797 +0000 UTC m=+3914.489291242" watchObservedRunningTime="2026-04-16 19:35:30.513365113 +0000 UTC m=+3914.491448576"
Apr 16 19:35:31.072763 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:31.072730 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-rfstz_a4e163bd-89bf-4b55-9d51-38032e333eb1/node-ca/0.log"
Apr 16 19:35:32.216368 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:32.216319 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-z87hx_8db23076-0658-4e7c-aab7-30f06e2174dc/serve-healthcheck-canary/0.log"
Apr 16 19:35:32.738199 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:32.738145 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-j6xwp_98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96/kube-rbac-proxy/0.log"
Apr 16 19:35:32.763862 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:32.763837 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-j6xwp_98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96/exporter/0.log"
Apr 16 19:35:32.787569 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:32.787546 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-j6xwp_98ba6330-64cf-4d4c-8ae0-2ddbe0a40d96/extractor/0.log"
Apr 16 19:35:34.873285 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:34.873260 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_model-serving-api-86f7b4b499-vtkth_97943f8c-7dba-46b1-ad54-1b60669b36e6/server/0.log"
Apr 16 19:35:35.060105 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:35.060069 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_odh-model-controller-696fc77849-jb782_8f05ea0c-23c2-4221-a337-1a09fd938881/manager/0.log"
Apr 16 19:35:35.084574 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:35.084549 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_s3-init-f5n5v_69095eac-fc69-48f2-a272-d16b50b3b10c/s3-init/0.log"
Apr 16 19:35:35.113681 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:35.113657 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_s3-tls-init-custom-8qnld_5935f990-cf29-4b33-a91d-2dbfbd69678b/s3-tls-init-custom/0.log"
Apr 16 19:35:35.141239 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:35.141168 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_s3-tls-init-serving-84sj7_f8db5af2-26ab-4f7b-8609-80b4afc589c7/s3-tls-init-serving/0.log"
Apr 16 19:35:35.173148 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:35.173123 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_seaweedfs-86cc847c5c-ss4br_d386611a-e077-450b-b47b-13f74a58b0b6/seaweedfs/0.log"
Apr 16 19:35:35.201114 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:35.201096 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_seaweedfs-tls-custom-5c88b85bb7-vdvnz_5b4dc11a-3fab-4a6b-9deb-9bf0c61f6370/seaweedfs-tls-custom/0.log"
Apr 16 19:35:35.230613 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:35.230595 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_seaweedfs-tls-serving-7fd5766db9-cl5fg_018134dd-e15a-4828-871f-992e5cd0ac85/seaweedfs-tls-serving/0.log"
Apr 16 19:35:36.503517 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:36.503492 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-dncfj/perf-node-gather-daemonset-5s7ch"
Apr 16 19:35:40.699001 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:40.698951 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-95kg5_89df2e8c-e3ce-4dda-afe0-e3720c021e56/kube-multus/0.log"
Apr 16 19:35:40.746357 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:40.746267 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-d2jts_c9036c3c-a41d-405f-acbf-c30968863203/kube-multus-additional-cni-plugins/0.log"
Apr 16 19:35:40.792060 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:40.792035 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-d2jts_c9036c3c-a41d-405f-acbf-c30968863203/egress-router-binary-copy/0.log"
Apr 16 19:35:40.840312 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:40.840287 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-d2jts_c9036c3c-a41d-405f-acbf-c30968863203/cni-plugins/0.log"
Apr 16 19:35:40.887219 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:40.887190 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-d2jts_c9036c3c-a41d-405f-acbf-c30968863203/bond-cni-plugin/0.log"
Apr 16 19:35:40.931204 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:40.931180 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-d2jts_c9036c3c-a41d-405f-acbf-c30968863203/routeoverride-cni/0.log"
Apr 16 19:35:40.984326 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:40.984297 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-d2jts_c9036c3c-a41d-405f-acbf-c30968863203/whereabouts-cni-bincopy/0.log"
Apr 16 19:35:41.021188 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:41.021116 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-d2jts_c9036c3c-a41d-405f-acbf-c30968863203/whereabouts-cni/0.log"
Apr 16 19:35:41.816406 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:41.816370 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-kk4tm_dc3c5cbb-7bc5-4228-88bf-021a899d1e57/network-metrics-daemon/0.log"
Apr 16 19:35:41.853477 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:41.853453 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-kk4tm_dc3c5cbb-7bc5-4228-88bf-021a899d1e57/kube-rbac-proxy/0.log"
Apr 16 19:35:43.409118 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:43.409077 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s62vp_bb2c10db-7942-42a5-a328-06839f22865c/ovn-controller/0.log"
Apr 16 19:35:43.445604 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:43.445573 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s62vp_bb2c10db-7942-42a5-a328-06839f22865c/ovn-acl-logging/0.log"
Apr 16 19:35:43.466741 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:43.466713 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s62vp_bb2c10db-7942-42a5-a328-06839f22865c/kube-rbac-proxy-node/0.log"
Apr 16 19:35:43.492963 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:43.492933 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s62vp_bb2c10db-7942-42a5-a328-06839f22865c/kube-rbac-proxy-ovn-metrics/0.log"
Apr 16 19:35:43.515067 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:43.515043 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s62vp_bb2c10db-7942-42a5-a328-06839f22865c/northd/0.log"
Apr 16 19:35:43.537570 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:43.537552 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s62vp_bb2c10db-7942-42a5-a328-06839f22865c/nbdb/0.log"
Apr 16 19:35:43.559285 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:43.559268 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s62vp_bb2c10db-7942-42a5-a328-06839f22865c/sbdb/0.log"
Apr 16 19:35:43.659733 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:43.659648 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s62vp_bb2c10db-7942-42a5-a328-06839f22865c/ovnkube-controller/0.log"
Apr 16 19:35:44.694745 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:44.694719 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-tfkdr_687e1330-7999-4eea-a8c8-b11fd9d8448f/network-check-target-container/0.log"
Apr 16 19:35:45.675721 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:45.675695 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-h4gn9_a54cc5a5-36a6-41a7-bb25-fc1eb332a322/iptables-alerter/0.log"
Apr 16 19:35:46.365202 ip-10-0-132-14 kubenswrapper[2569]: I0416 19:35:46.365172 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-6696h_e30175fe-31a1-408c-bf6b-fcf72a498c7c/tuned/0.log"