Apr 23 17:58:12.566599 ip-10-0-130-162 systemd[1]: Starting Kubernetes Kubelet...
Apr 23 17:58:13.069738 ip-10-0-130-162 kubenswrapper[2572]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:58:13.069738 ip-10-0-130-162 kubenswrapper[2572]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 23 17:58:13.069738 ip-10-0-130-162 kubenswrapper[2572]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:58:13.069738 ip-10-0-130-162 kubenswrapper[2572]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 23 17:58:13.069738 ip-10-0-130-162 kubenswrapper[2572]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:58:13.071719 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.071620    2572 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 23 17:58:13.073973 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.073956    2572 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:58:13.073973 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.073973    2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.073977    2572 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.073981    2572 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.073984    2572 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.073987    2572 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.073989    2572 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.073992    2572 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.073995    2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.073998    2572 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074001    2572 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074003    2572 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074006    2572 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074009    2572 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074012    2572 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074014    2572 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074017    2572 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074019    2572 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074022    2572 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074025    2572 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074027    2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:58:13.074032 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074030    2572 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074033    2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074036    2572 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074041    2572 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074045    2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074052    2572 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074055    2572 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074058    2572 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074061    2572 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074064    2572 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074066    2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074069    2572 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074071    2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074074    2572 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074077    2572 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074079    2572 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074082    2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074084    2572 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074087    2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074089    2572 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:58:13.074552 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074092    2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074094    2572 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074097    2572 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074099    2572 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074103    2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074105    2572 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074108    2572 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074110    2572 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074113    2572 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074116    2572 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074119    2572 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074121    2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074125    2572 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074129    2572 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074133    2572 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074136    2572 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074139    2572 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074141    2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074144    2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:58:13.075044 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074147    2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074149    2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074152    2572 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074154    2572 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074157    2572 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074159    2572 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074162    2572 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074164    2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074167    2572 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074169    2572 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074172    2572 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074175    2572 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074179    2572 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074181    2572 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074184    2572 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074187    2572 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074190    2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074192    2572 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074195    2572 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074197    2572 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:58:13.075520 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074200    2572 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074202    2572 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074205    2572 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074207    2572 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074210    2572 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074212    2572 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074618    2572 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074624    2572 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074627    2572 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074630    2572 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074632    2572 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074635    2572 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074638    2572 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074641    2572 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074644    2572 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074647    2572 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074649    2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074652    2572 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074654    2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:58:13.075999 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074657    2572 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074659    2572 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074662    2572 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074665    2572 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074667    2572 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074670    2572 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074673    2572 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074675    2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074680    2572 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074683    2572 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074686    2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074688    2572 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074691    2572 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074693    2572 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074696    2572 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074698    2572 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074701    2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074703    2572 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074707    2572 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:58:13.076471 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074711    2572 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074725    2572 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074730    2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074733    2572 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074736    2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074739    2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074742    2572 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074752    2572 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074756    2572 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074759    2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074762    2572 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074765    2572 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074768    2572 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074771    2572 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074773    2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074776    2572 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074779    2572 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074782    2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074784    2572 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074787    2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:58:13.076927 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074789    2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074792    2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074795    2572 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074798    2572 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074800    2572 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074803    2572 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074806    2572 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074808    2572 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074812    2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074814    2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074817    2572 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074819    2572 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074822    2572 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074826    2572 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074828    2572 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074830    2572 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074833    2572 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074835    2572 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074839    2572 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074841    2572 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:58:13.077419 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074844    2572 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074846    2572 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074849    2572 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074851    2572 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074854    2572 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074856    2572 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074859    2572 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074861    2572 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074864    2572 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074866    2572 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074869    2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074872    2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074874    2572 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.074877    2572 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.074951    2572 flags.go:64] FLAG: --address="0.0.0.0"
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.074962    2572 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.074973    2572 flags.go:64] FLAG: --anonymous-auth="true"
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.074980    2572 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.074987    2572 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.074991    2572 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075001    2572 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 23 17:58:13.077907 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075006    2572 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075009    2572 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075013    2572 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075016    2572 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075020    2572 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075023    2572 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075026    2572 flags.go:64] FLAG: --cgroup-root=""
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075029    2572 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075032    2572 flags.go:64] FLAG: --client-ca-file=""
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075034    2572 flags.go:64] FLAG: --cloud-config=""
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075037    2572 flags.go:64] FLAG: --cloud-provider="external"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075040    2572 flags.go:64] FLAG: --cluster-dns="[]"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075046    2572 flags.go:64] FLAG: --cluster-domain=""
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075050    2572 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075053    2572 flags.go:64] FLAG: --config-dir=""
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075056    2572 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075060    2572 flags.go:64] FLAG: --container-log-max-files="5"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075064    2572 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075067    2572 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075070    2572 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075074    2572 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075077    2572 flags.go:64] FLAG: --contention-profiling="false"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075079    2572 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075083    2572 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075086    2572 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 23 17:58:13.078433 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075090    2572 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075094    2572 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075098    2572 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075101    2572 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075104    2572 flags.go:64] FLAG: --enable-load-reader="false"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075107    2572 flags.go:64] FLAG: --enable-server="true"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075110    2572 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075114    2572 flags.go:64] FLAG: --event-burst="100"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075118    2572 flags.go:64] FLAG: --event-qps="50"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075121    2572 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075124    2572 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075126    2572 flags.go:64] FLAG: --eviction-hard=""
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075131    2572 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075134    2572 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075137    2572 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075140    2572 flags.go:64] FLAG: --eviction-soft=""
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075144    2572 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075147    2572 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075150    2572 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075153    2572 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075156    2572 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075159    2572 flags.go:64] FLAG: --fail-swap-on="true"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075162    2572 flags.go:64] FLAG: --feature-gates=""
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075165    2572 flags.go:64] FLAG: --file-check-frequency="20s"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075168    2572 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Apr 23 17:58:13.079041 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075171    2572 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075175    2572 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075178    2572 flags.go:64] FLAG: --healthz-port="10248"
Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075181    2572 flags.go:64] FLAG: --help="false"
Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075186    2572 flags.go:64] FLAG:
--hostname-override="ip-10-0-130-162.ec2.internal" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075189 2572 flags.go:64] FLAG: --housekeeping-interval="10s" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075192 2572 flags.go:64] FLAG: --http-check-frequency="20s" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075195 2572 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075199 2572 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075202 2572 flags.go:64] FLAG: --image-gc-high-threshold="85" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075205 2572 flags.go:64] FLAG: --image-gc-low-threshold="80" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075208 2572 flags.go:64] FLAG: --image-service-endpoint="" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075211 2572 flags.go:64] FLAG: --kernel-memcg-notification="false" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075214 2572 flags.go:64] FLAG: --kube-api-burst="100" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075217 2572 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075220 2572 flags.go:64] FLAG: --kube-api-qps="50" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075226 2572 flags.go:64] FLAG: --kube-reserved="" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075229 2572 flags.go:64] FLAG: --kube-reserved-cgroup="" Apr 23 17:58:13.079703 ip-10-0-130-162 
kubenswrapper[2572]: I0423 17:58:13.075231 2572 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075234 2572 flags.go:64] FLAG: --kubelet-cgroups="" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075237 2572 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075240 2572 flags.go:64] FLAG: --lock-file="" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075243 2572 flags.go:64] FLAG: --log-cadvisor-usage="false" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075246 2572 flags.go:64] FLAG: --log-flush-frequency="5s" Apr 23 17:58:13.079703 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075249 2572 flags.go:64] FLAG: --log-json-info-buffer-size="0" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075254 2572 flags.go:64] FLAG: --log-json-split-stream="false" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075257 2572 flags.go:64] FLAG: --log-text-info-buffer-size="0" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075260 2572 flags.go:64] FLAG: --log-text-split-stream="false" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075263 2572 flags.go:64] FLAG: --logging-format="text" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075266 2572 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075269 2572 flags.go:64] FLAG: --make-iptables-util-chains="true" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075272 2572 flags.go:64] FLAG: --manifest-url="" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075275 2572 flags.go:64] FLAG: 
--manifest-url-header="" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075279 2572 flags.go:64] FLAG: --max-housekeeping-interval="15s" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075282 2572 flags.go:64] FLAG: --max-open-files="1000000" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075286 2572 flags.go:64] FLAG: --max-pods="110" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075291 2572 flags.go:64] FLAG: --maximum-dead-containers="-1" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075294 2572 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075296 2572 flags.go:64] FLAG: --memory-manager-policy="None" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075299 2572 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075302 2572 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075305 2572 flags.go:64] FLAG: --node-ip="0.0.0.0" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075308 2572 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075330 2572 flags.go:64] FLAG: --node-status-max-images="50" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075336 2572 flags.go:64] FLAG: --node-status-update-frequency="10s" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075341 2572 flags.go:64] FLAG: --oom-score-adj="-999" Apr 23 17:58:13.080275 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075347 2572 flags.go:64] FLAG: --pod-cidr="" Apr 23 17:58:13.080275 
ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075351 2572 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075358 2572 flags.go:64] FLAG: --pod-manifest-path="" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075361 2572 flags.go:64] FLAG: --pod-max-pids="-1" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075365 2572 flags.go:64] FLAG: --pods-per-core="0" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075368 2572 flags.go:64] FLAG: --port="10250" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075371 2572 flags.go:64] FLAG: --protect-kernel-defaults="false" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075374 2572 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-034596505c40da2a8" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075377 2572 flags.go:64] FLAG: --qos-reserved="" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075381 2572 flags.go:64] FLAG: --read-only-port="10255" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075384 2572 flags.go:64] FLAG: --register-node="true" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075387 2572 flags.go:64] FLAG: --register-schedulable="true" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075389 2572 flags.go:64] FLAG: --register-with-taints="" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075393 2572 flags.go:64] FLAG: --registry-burst="10" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075396 2572 flags.go:64] FLAG: --registry-qps="5" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: 
I0423 17:58:13.075399 2572 flags.go:64] FLAG: --reserved-cpus="" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075402 2572 flags.go:64] FLAG: --reserved-memory="" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075406 2572 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075409 2572 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075412 2572 flags.go:64] FLAG: --rotate-certificates="false" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075415 2572 flags.go:64] FLAG: --rotate-server-certificates="false" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075418 2572 flags.go:64] FLAG: --runonce="false" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075420 2572 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075425 2572 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075428 2572 flags.go:64] FLAG: --seccomp-default="false" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075431 2572 flags.go:64] FLAG: --serialize-image-pulls="true" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075434 2572 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Apr 23 17:58:13.080874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075438 2572 flags.go:64] FLAG: --storage-driver-db="cadvisor" Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075442 2572 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075445 2572 flags.go:64] FLAG: --storage-driver-password="root" 
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075448 2572 flags.go:64] FLAG: --storage-driver-secure="false"
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075451 2572 flags.go:64] FLAG: --storage-driver-table="stats"
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075454 2572 flags.go:64] FLAG: --storage-driver-user="root"
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075457 2572 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075464 2572 flags.go:64] FLAG: --sync-frequency="1m0s"
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075468 2572 flags.go:64] FLAG: --system-cgroups=""
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075470 2572 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075476 2572 flags.go:64] FLAG: --system-reserved-cgroup=""
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075478 2572 flags.go:64] FLAG: --tls-cert-file=""
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075482 2572 flags.go:64] FLAG: --tls-cipher-suites="[]"
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075487 2572 flags.go:64] FLAG: --tls-min-version=""
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075490 2572 flags.go:64] FLAG: --tls-private-key-file=""
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075493 2572 flags.go:64] FLAG: --topology-manager-policy="none"
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075496 2572 flags.go:64] FLAG: --topology-manager-policy-options=""
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075499 2572 flags.go:64] FLAG: --topology-manager-scope="container"
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075502 2572 flags.go:64] FLAG: --v="2"
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075506 2572 flags.go:64] FLAG: --version="false"
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075510 2572 flags.go:64] FLAG: --vmodule=""
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075514 2572 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.075517 2572 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075617 2572 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075621 2572 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:58:13.081509 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075624 2572 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075626 2572 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075629 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075633 2572 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075636 2572 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075638 2572 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075641 2572 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075644 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075646 2572 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075649 2572 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075652 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075655 2572 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075658 2572 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075660 2572 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075664 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075667 2572 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075669 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075672 2572 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075674 2572 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075677 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:58:13.082382 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075680 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075682 2572 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075685 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075687 2572 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075690 2572 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075692 2572 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075695 2572 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075697 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075700 2572 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075702 2572 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075705 2572 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075708 2572 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075710 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075713 2572 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075715 2572 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075719 2572 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075721 2572 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075724 2572 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075726 2572 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:58:13.083260 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075729 2572 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075732 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075735 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075737 2572 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075741 2572 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075746 2572 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075750 2572 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075755 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075758 2572 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075761 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075764 2572 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075768 2572 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075770 2572 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075773 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075775 2572 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075778 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075780 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075783 2572 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075786 2572 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:58:13.084025 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075788 2572 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075791 2572 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075793 2572 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075796 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075799 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075801 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075804 2572 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075806 2572 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075809 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075812 2572 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075815 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075817 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075820 2572 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075823 2572 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075825 2572 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075828 2572 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075830 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075833 2572 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075835 2572 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075838 2572 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:58:13.084571 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075842 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:58:13.085345 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075845 2572 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:58:13.085345 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075847 2572 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:58:13.085345 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075850 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:58:13.085345 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075852 2572 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:58:13.085345 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.075855 2572 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:58:13.085345 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.076751 2572 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 23 17:58:13.085345 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.084732 2572 server.go:530] "Kubelet version" kubeletVersion="v1.33.9"
Apr 23 17:58:13.085345 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.084890 2572 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 23 17:58:13.085345 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.084974 2572 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:58:13.085345 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.084982 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:58:13.085345 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.084988 2572 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:58:13.085345 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.084993 2572 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:58:13.085345 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.084998 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:58:13.085345 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085004 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:58:13.085345 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085009 2572 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:58:13.085345 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085013 2572 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085017 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085024 2572 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085031 2572 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085035 2572 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085039 2572 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085043 2572 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085047 2572 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085052 2572 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085057 2572 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085061 2572 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085065 2572 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085071 2572 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085075 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085079 2572 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085084 2572 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085088 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085093 2572 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085098 2572 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085102 2572 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:58:13.085832 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085106 2572 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085110 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085114 2572 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085119 2572 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085125 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085130 2572 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085134 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085138 2572 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085142 2572 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085146 2572 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085151 2572 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085156 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085160 2572 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085164 2572 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085169 2572 feature_gate.go:328] unrecognized feature gate: 
VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085174 2572 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085179 2572 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085183 2572 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085188 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085192 2572 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 23 17:58:13.086329 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085196 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085200 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085204 2572 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085208 2572 feature_gate.go:328] unrecognized feature gate: Example2 Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085212 2572 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085216 2572 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085221 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: 
W0423 17:58:13.085225 2572 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085229 2572 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085236 2572 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085242 2572 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085247 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085251 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085255 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085260 2572 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085265 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085270 2572 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085276 2572 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085280 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085284 2572 feature_gate.go:328] 
unrecognized feature gate: DNSNameResolver Apr 23 17:58:13.087047 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085288 2572 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085292 2572 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085297 2572 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085301 2572 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085305 2572 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085309 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085313 2572 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085334 2572 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085339 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085343 2572 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085346 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085350 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 23 17:58:13.087624 ip-10-0-130-162 
kubenswrapper[2572]: W0423 17:58:13.085354 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085358 2572 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085362 2572 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085366 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085370 2572 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085375 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 23 17:58:13.087624 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.085379 2572 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 23 17:58:13.088130 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.085387 2572 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 23 17:58:13.088130 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086639 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 23 17:58:13.088130 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086650 2572 feature_gate.go:328] unrecognized feature gate: 
ExternalSnapshotMetadata Apr 23 17:58:13.088130 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086655 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 23 17:58:13.088130 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086660 2572 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 23 17:58:13.088130 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086666 2572 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 23 17:58:13.088130 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086674 2572 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 23 17:58:13.088130 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086679 2572 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 23 17:58:13.088130 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086683 2572 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 23 17:58:13.088130 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086687 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 23 17:58:13.088130 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086692 2572 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 23 17:58:13.088130 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086698 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 23 17:58:13.088130 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086702 2572 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 23 17:58:13.088130 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086706 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 23 17:58:13.088130 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086711 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 23 17:58:13.088683 ip-10-0-130-162 
kubenswrapper[2572]: W0423 17:58:13.086723 2572 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086728 2572 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086732 2572 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086736 2572 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086740 2572 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086745 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086749 2572 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086753 2572 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086757 2572 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086761 2572 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086766 2572 feature_gate.go:328] unrecognized feature gate: Example Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086770 2572 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086774 2572 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 23 17:58:13.088683 
ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086778 2572 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086782 2572 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086787 2572 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086791 2572 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086795 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086798 2572 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 23 17:58:13.088683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086803 2572 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086809 2572 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086814 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086819 2572 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086824 2572 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086828 2572 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086833 2572 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086837 2572 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086842 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086847 2572 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086857 2572 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086862 2572 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086867 2572 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086872 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086876 2572 feature_gate.go:328] 
unrecognized feature gate: AzureClusterHostedDNSInstall Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086880 2572 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086884 2572 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086888 2572 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086892 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 23 17:58:13.089301 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086896 2572 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086900 2572 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086904 2572 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086908 2572 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086913 2572 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086917 2572 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086921 2572 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086925 2572 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: 
W0423 17:58:13.086929 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086934 2572 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086938 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086942 2572 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086946 2572 feature_gate.go:328] unrecognized feature gate: Example2 Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086950 2572 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086955 2572 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086959 2572 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086963 2572 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086967 2572 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086971 2572 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086975 2572 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 23 17:58:13.089774 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086980 2572 feature_gate.go:328] unrecognized feature gate: 
ImageStreamImportMode Apr 23 17:58:13.090282 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086985 2572 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 23 17:58:13.090282 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086989 2572 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 23 17:58:13.090282 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086993 2572 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 23 17:58:13.090282 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.086998 2572 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 23 17:58:13.090282 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.087002 2572 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 23 17:58:13.090282 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.087006 2572 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 23 17:58:13.090282 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.087010 2572 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 23 17:58:13.090282 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.087014 2572 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 23 17:58:13.090282 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.087018 2572 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 23 17:58:13.090282 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.087022 2572 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 23 17:58:13.090282 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.087026 2572 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 23 17:58:13.090282 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:13.087030 2572 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 23 17:58:13.090282 ip-10-0-130-162 
kubenswrapper[2572]: W0423 17:58:13.087034 2572 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 23 17:58:13.090282 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.087042 2572 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 23 17:58:13.090282 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.087916 2572 server.go:962] "Client rotation is on, will bootstrap in background" Apr 23 17:58:13.093198 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.093179 2572 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Apr 23 17:58:13.094370 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.094357 2572 server.go:1019] "Starting client certificate rotation" Apr 23 17:58:13.094468 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.094450 2572 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 23 17:58:13.094507 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.094489 2572 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 23 17:58:13.126531 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.126509 2572 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 23 17:58:13.131902 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.131883 2572 dynamic_cafile_content.go:161] "Starting controller" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 23 17:58:13.146638 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.146616 2572 log.go:25] "Validated CRI v1 runtime API" Apr 23 17:58:13.153039 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.153022 2572 log.go:25] "Validated CRI v1 image API" Apr 23 17:58:13.157374 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.157353 2572 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 23 17:58:13.160022 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.160003 2572 fs.go:135] Filesystem UUIDs: map[4fa4b43d-a822-410a-a6d3-f5b00a68bef6:/dev/nvme0n1p4 60930929-5f90-45de-b602-0c49b6ddd850:/dev/nvme0n1p3 7B77-95E7:/dev/nvme0n1p2] Apr 23 17:58:13.160097 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.160021 2572 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Apr 23 17:58:13.162425 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.162404 2572 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 23 17:58:13.165742 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.165638 2572 manager.go:217] Machine: {Timestamp:2026-04-23 17:58:13.163964007 +0000 UTC m=+0.462210131 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3100538 MemoryCapacity:33164492800 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} 
HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2247b752c4e15ae97d68c851e7e299 SystemUUID:ec2247b7-52c4-e15a-e97d-68c851e7e299 BootID:b22c832c-1f64-4d86-9dc3-abacf3b20a05 Filesystems:[{Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582246400 Type:vfs Inodes:4048400 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632898560 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6103040 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582246400 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:b0:8f:2c:4b:6d Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:b0:8f:2c:4b:6d Speed:0 Mtu:9001} {Name:ovs-system MacAddress:be:02:5d:55:c5:25 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164492800 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified 
Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Apr 23 17:58:13.166785 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.166774 2572 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Apr 23 17:58:13.166915 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.166898 2572 manager.go:233] Version: {KernelVersion:5.14.0-570.107.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260414-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Apr 23 17:58:13.169276 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.169252 2572 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 23 17:58:13.169429 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.169279 2572 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-10-0-130-162.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 23 17:58:13.169473 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.169435 2572 topology_manager.go:138] "Creating topology manager with none policy" Apr 23 17:58:13.169473 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.169444 2572 container_manager_linux.go:306] "Creating device plugin manager" Apr 23 17:58:13.169473 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.169461 
2572 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 23 17:58:13.170509 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.170499 2572 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 23 17:58:13.171384 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.171374 2572 state_mem.go:36] "Initialized new in-memory state store" Apr 23 17:58:13.171501 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.171492 2572 server.go:1267] "Using root directory" path="/var/lib/kubelet" Apr 23 17:58:13.174966 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.174956 2572 kubelet.go:491] "Attempting to sync node with API server" Apr 23 17:58:13.175005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.174970 2572 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 23 17:58:13.175005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.174986 2572 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Apr 23 17:58:13.175005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.174997 2572 kubelet.go:397] "Adding apiserver pod source" Apr 23 17:58:13.175005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.175004 2572 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 23 17:58:13.176154 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.176142 2572 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 23 17:58:13.176199 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.176161 2572 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 23 17:58:13.181087 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.181072 2572 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 23 17:58:13.182573 ip-10-0-130-162 
kubenswrapper[2572]: I0423 17:58:13.182559 2572 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 23 17:58:13.184612 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.184594 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 23 17:58:13.184681 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.184620 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 23 17:58:13.184681 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.184631 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 23 17:58:13.184681 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.184642 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 23 17:58:13.184681 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.184651 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 23 17:58:13.184681 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.184667 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Apr 23 17:58:13.184681 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.184676 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Apr 23 17:58:13.184893 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.184686 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 23 17:58:13.184893 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.184698 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 23 17:58:13.184893 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.184709 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 23 17:58:13.184893 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.184722 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 23 
17:58:13.184893 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.184737 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 23 17:58:13.187443 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.187424 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 23 17:58:13.187503 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.187478 2572 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Apr 23 17:58:13.189821 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.189798 2572 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-130-162.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:58:13.190960 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.190924 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-130-162.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:58:13.191038 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.190923 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:58:13.191301 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.191289 2572 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 23 17:58:13.191348 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.191342 2572 server.go:1295] "Started kubelet" Apr 23 17:58:13.191430 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.191409 2572 server.go:180] "Starting to listen" address="0.0.0.0" 
port=10250 Apr 23 17:58:13.191525 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.191483 2572 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 23 17:58:13.191581 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.191554 2572 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 23 17:58:13.192206 ip-10-0-130-162 systemd[1]: Started Kubernetes Kubelet. Apr 23 17:58:13.192993 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.192967 2572 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 23 17:58:13.194371 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.194351 2572 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-kcr87" Apr 23 17:58:13.194596 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.194584 2572 server.go:317] "Adding debug handlers to kubelet server" Apr 23 17:58:13.198561 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.198541 2572 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Apr 23 17:58:13.198999 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.198984 2572 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 23 17:58:13.199420 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.198257 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-130-162.ec2.internal.18a90e280f39d6c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-130-162.ec2.internal,UID:ip-10-0-130-162.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-130-162.ec2.internal,},FirstTimestamp:2026-04-23 17:58:13.191300801 +0000 UTC m=+0.489546925,LastTimestamp:2026-04-23 17:58:13.191300801 +0000 UTC m=+0.489546925,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-130-162.ec2.internal,}" Apr 23 17:58:13.199748 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.199732 2572 volume_manager.go:295] "The desired_state_of_world populator starts" Apr 23 17:58:13.199748 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.199748 2572 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 23 17:58:13.199874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.199748 2572 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 23 17:58:13.199874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.199845 2572 reconstruct.go:97] "Volume reconstruction finished" Apr 23 17:58:13.199874 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.199853 2572 reconciler.go:26] "Reconciler: start to sync state" Apr 23 17:58:13.200044 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.199979 2572 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-162.ec2.internal\" not found" Apr 23 17:58:13.200652 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.200629 2572 factory.go:153] Registering CRI-O factory Apr 23 17:58:13.200747 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.200700 2572 factory.go:223] Registration of the crio container factory successfully Apr 23 17:58:13.200799 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.200759 2572 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Apr 23 17:58:13.200799 
ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.200769 2572 factory.go:55] Registering systemd factory Apr 23 17:58:13.200799 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.200777 2572 factory.go:223] Registration of the systemd container factory successfully Apr 23 17:58:13.200799 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.200796 2572 factory.go:103] Registering Raw factory Apr 23 17:58:13.200979 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.200811 2572 manager.go:1196] Started watching for new ooms in manager Apr 23 17:58:13.203727 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.203711 2572 manager.go:319] Starting recovery of all containers Apr 23 17:58:13.204524 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.204503 2572 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-kcr87" Apr 23 17:58:13.205045 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.205018 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-130-162.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Apr 23 17:58:13.205162 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.205131 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:58:13.214528 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.214510 2572 manager.go:324] Recovery completed Apr 23 17:58:13.219078 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.219059 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 
17:58:13.221432 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.221417 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:58:13.221519 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.221450 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:58:13.221519 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.221466 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:58:13.221940 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.221923 2572 cpu_manager.go:222] "Starting CPU manager" policy="none" Apr 23 17:58:13.221940 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.221937 2572 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Apr 23 17:58:13.222043 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.221952 2572 state_mem.go:36] "Initialized new in-memory state store" Apr 23 17:58:13.224023 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.223953 2572 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-130-162.ec2.internal.18a90e2811059f36 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-130-162.ec2.internal,UID:ip-10-0-130-162.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-130-162.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-130-162.ec2.internal,},FirstTimestamp:2026-04-23 17:58:13.221433142 +0000 UTC m=+0.519679272,LastTimestamp:2026-04-23 17:58:13.221433142 +0000 UTC 
m=+0.519679272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-130-162.ec2.internal,}" Apr 23 17:58:13.224125 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.224027 2572 policy_none.go:49] "None policy: Start" Apr 23 17:58:13.224125 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.224039 2572 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 23 17:58:13.224125 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.224049 2572 state_mem.go:35] "Initializing new in-memory state store" Apr 23 17:58:13.267871 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.267854 2572 manager.go:341] "Starting Device Plugin manager" Apr 23 17:58:13.278046 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.267917 2572 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 23 17:58:13.278046 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.267932 2572 server.go:85] "Starting device plugin registration server" Apr 23 17:58:13.278046 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.268140 2572 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 23 17:58:13.278046 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.268150 2572 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 23 17:58:13.278046 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.268250 2572 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Apr 23 17:58:13.278046 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.268345 2572 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Apr 23 17:58:13.278046 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.268354 2572 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 23 17:58:13.278046 ip-10-0-130-162 kubenswrapper[2572]: E0423 
17:58:13.268986 2572 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Apr 23 17:58:13.278046 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.269025 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-130-162.ec2.internal\" not found" Apr 23 17:58:13.302058 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.302034 2572 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 23 17:58:13.303245 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.303221 2572 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 23 17:58:13.303314 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.303255 2572 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 23 17:58:13.303314 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.303286 2572 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 23 17:58:13.303314 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.303301 2572 kubelet.go:2451] "Starting kubelet main sync loop" Apr 23 17:58:13.303461 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.303401 2572 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 23 17:58:13.305159 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.305140 2572 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:58:13.369040 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.368981 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:58:13.370263 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.370235 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:58:13.370263 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.370264 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:58:13.370406 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.370276 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:58:13.370406 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.370299 2572 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-130-162.ec2.internal" Apr 23 17:58:13.376153 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.376136 2572 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-130-162.ec2.internal" Apr 23 17:58:13.376200 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.376160 2572 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-130-162.ec2.internal\": node \"ip-10-0-130-162.ec2.internal\" not found" Apr 23 
17:58:13.391433 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.391412 2572 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-162.ec2.internal\" not found" Apr 23 17:58:13.403815 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.403794 2572 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-130-162.ec2.internal"] Apr 23 17:58:13.403888 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.403853 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:58:13.405985 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.405962 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:58:13.406067 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.405992 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:58:13.406067 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.406008 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:58:13.407287 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.407273 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:58:13.407406 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.407391 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal" Apr 23 17:58:13.407464 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.407434 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:58:13.408086 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.408070 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:58:13.408159 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.408091 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:58:13.408159 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.408101 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:58:13.408159 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.408134 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:58:13.408159 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.408153 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:58:13.408298 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.408173 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:58:13.409484 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.409470 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-130-162.ec2.internal" Apr 23 17:58:13.409551 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.409493 2572 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:58:13.410216 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.410201 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:58:13.410288 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.410231 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:58:13.410288 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.410246 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:58:13.438903 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.438882 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-130-162.ec2.internal\" not found" node="ip-10-0-130-162.ec2.internal" Apr 23 17:58:13.443109 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.443093 2572 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-130-162.ec2.internal\" not found" node="ip-10-0-130-162.ec2.internal" Apr 23 17:58:13.491617 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.491595 2572 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-162.ec2.internal\" not found" Apr 23 17:58:13.592113 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.592066 2572 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-162.ec2.internal\" not found" Apr 23 17:58:13.601404 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.601385 2572 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/0b2178ea263236be918f2343e8d4bd48-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal\" (UID: \"0b2178ea263236be918f2343e8d4bd48\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal" Apr 23 17:58:13.601467 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.601413 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b2178ea263236be918f2343e8d4bd48-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal\" (UID: \"0b2178ea263236be918f2343e8d4bd48\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal" Apr 23 17:58:13.601467 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.601430 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/55d1c989d003a1c5d6c5adfec051c073-config\") pod \"kube-apiserver-proxy-ip-10-0-130-162.ec2.internal\" (UID: \"55d1c989d003a1c5d6c5adfec051c073\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-130-162.ec2.internal" Apr 23 17:58:13.692848 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.692772 2572 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-162.ec2.internal\" not found" Apr 23 17:58:13.702114 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.702097 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/0b2178ea263236be918f2343e8d4bd48-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal\" (UID: \"0b2178ea263236be918f2343e8d4bd48\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal" Apr 23 
17:58:13.702167 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.702120 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b2178ea263236be918f2343e8d4bd48-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal\" (UID: \"0b2178ea263236be918f2343e8d4bd48\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal"
Apr 23 17:58:13.702167 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.702138 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/55d1c989d003a1c5d6c5adfec051c073-config\") pod \"kube-apiserver-proxy-ip-10-0-130-162.ec2.internal\" (UID: \"55d1c989d003a1c5d6c5adfec051c073\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-130-162.ec2.internal"
Apr 23 17:58:13.702229 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.702205 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/0b2178ea263236be918f2343e8d4bd48-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal\" (UID: \"0b2178ea263236be918f2343e8d4bd48\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal"
Apr 23 17:58:13.702263 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.702208 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/55d1c989d003a1c5d6c5adfec051c073-config\") pod \"kube-apiserver-proxy-ip-10-0-130-162.ec2.internal\" (UID: \"55d1c989d003a1c5d6c5adfec051c073\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-130-162.ec2.internal"
Apr 23 17:58:13.702263 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.702208 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b2178ea263236be918f2343e8d4bd48-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal\" (UID: \"0b2178ea263236be918f2343e8d4bd48\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal"
Apr 23 17:58:13.741255 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.741228 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal"
Apr 23 17:58:13.745951 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:13.745932 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-130-162.ec2.internal"
Apr 23 17:58:13.793162 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.793134 2572 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-162.ec2.internal\" not found"
Apr 23 17:58:13.893652 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.893618 2572 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-162.ec2.internal\" not found"
Apr 23 17:58:13.994147 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:13.994118 2572 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-162.ec2.internal\" not found"
Apr 23 17:58:14.088383 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.088355 2572 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:58:14.093759 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.093735 2572 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Apr 23 17:58:14.093891 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.093845 2572 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Apr 23 17:58:14.093950 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.093902 2572 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Apr 23 17:58:14.094818 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:14.094800 2572 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-162.ec2.internal\" not found"
Apr 23 17:58:14.143445 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.143423 2572 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:58:14.195228 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:14.195203 2572 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-162.ec2.internal\" not found"
Apr 23 17:58:14.198688 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.198666 2572 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Apr 23 17:58:14.208815 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.208797 2572 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 23 17:58:14.210690 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.210667 2572 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-22 17:53:13 +0000 UTC" deadline="2027-11-30 05:47:13.77848937 +0000 UTC"
Apr 23 17:58:14.210746 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.210690 2572 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="14051h48m59.567801998s"
Apr 23 17:58:14.232501 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:14.232470 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b2178ea263236be918f2343e8d4bd48.slice/crio-af0a003e58035c9c67a50f7732fba536026cc7d46292a88f172161c927ff1cea WatchSource:0}: Error finding container af0a003e58035c9c67a50f7732fba536026cc7d46292a88f172161c927ff1cea: Status 404 returned error can't find the container with id af0a003e58035c9c67a50f7732fba536026cc7d46292a88f172161c927ff1cea
Apr 23 17:58:14.232814 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:14.232796 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55d1c989d003a1c5d6c5adfec051c073.slice/crio-b94cabab537260387f3b1690a5dea144c0414a9a2c1879164ff439c069df6fd2 WatchSource:0}: Error finding container b94cabab537260387f3b1690a5dea144c0414a9a2c1879164ff439c069df6fd2: Status 404 returned error can't find the container with id b94cabab537260387f3b1690a5dea144c0414a9a2c1879164ff439c069df6fd2
Apr 23 17:58:14.233132 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.233116 2572 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-fwjxp"
Apr 23 17:58:14.237049 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.237036 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 17:58:14.240358 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.240341 2572 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-fwjxp"
Apr 23 17:58:14.295841 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:14.295780 2572 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-162.ec2.internal\" not found"
Apr 23 17:58:14.305917 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.305859 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal" event={"ID":"0b2178ea263236be918f2343e8d4bd48","Type":"ContainerStarted","Data":"af0a003e58035c9c67a50f7732fba536026cc7d46292a88f172161c927ff1cea"}
Apr 23 17:58:14.306860 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.306840 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-130-162.ec2.internal" event={"ID":"55d1c989d003a1c5d6c5adfec051c073","Type":"ContainerStarted","Data":"b94cabab537260387f3b1690a5dea144c0414a9a2c1879164ff439c069df6fd2"}
Apr 23 17:58:14.336937 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.336918 2572 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:58:14.399502 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.399479 2572 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal"
Apr 23 17:58:14.413555 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.413539 2572 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 23 17:58:14.415154 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.415143 2572 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-130-162.ec2.internal"
Apr 23 17:58:14.424139 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:14.424125 2572 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 23 17:58:15.176245 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.176214 2572 apiserver.go:52] "Watching apiserver"
Apr 23 17:58:15.185087 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.185058 2572 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Apr 23 17:58:15.185538 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.185509 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-9d9xv","openshift-monitoring/node-exporter-b6xsg","openshift-multus/multus-additional-cni-plugins-lg6b8","openshift-multus/network-metrics-daemon-nh2kn","openshift-network-operator/iptables-alerter-vdzr8","openshift-ovn-kubernetes/ovnkube-node-rz688","kube-system/konnectivity-agent-4vkdl","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6","openshift-cluster-node-tuning-operator/tuned-rf4xp","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal","openshift-multus/multus-69j8s","openshift-network-diagnostics/network-check-target-dpfbr","kube-system/kube-apiserver-proxy-ip-10-0-130-162.ec2.internal","openshift-dns/node-resolver-mggbx"]
Apr 23 17:58:15.187122 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.187100 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-4vkdl"
Apr 23 17:58:15.189939 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.189918 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-mszg9\""
Apr 23 17:58:15.190046 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.189935 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\""
Apr 23 17:58:15.190046 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.189984 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.190046 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.189918 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\""
Apr 23 17:58:15.190207 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.190149 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lg6b8"
Apr 23 17:58:15.191808 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.191524 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:15.191808 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:15.191619 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nh2kn" podUID="d9157db1-0537-4915-a273-5b7a482bc173"
Apr 23 17:58:15.192138 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.192119 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-jl6x2\""
Apr 23 17:58:15.192486 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.192466 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Apr 23 17:58:15.193072 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.192818 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Apr 23 17:58:15.193527 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.193507 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-l2cdq\""
Apr 23 17:58:15.193618 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.193574 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Apr 23 17:58:15.193738 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.193717 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Apr 23 17:58:15.193809 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.193771 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Apr 23 17:58:15.193878 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.193512 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Apr 23 17:58:15.194568 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.194539 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-vdzr8"
Apr 23 17:58:15.194877 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.194682 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.196212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.195978 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.196933 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.196914 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Apr 23 17:58:15.197249 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.197229 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Apr 23 17:58:15.198034 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.197807 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Apr 23 17:58:15.198113 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.197845 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-qg2c5\""
Apr 23 17:58:15.199183 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.199017 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Apr 23 17:58:15.199183 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.199034 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Apr 23 17:58:15.199183 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.199147 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mggbx"
Apr 23 17:58:15.199374 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.199204 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Apr 23 17:58:15.199374 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.199016 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-sqdmk\""
Apr 23 17:58:15.200930 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.200912 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Apr 23 17:58:15.201137 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.201121 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Apr 23 17:58:15.201297 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.201281 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Apr 23 17:58:15.201366 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.201301 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\""
Apr 23 17:58:15.202135 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.201544 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-bvtlq\""
Apr 23 17:58:15.202559 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.202302 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.203587 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.203314 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-28r6m\""
Apr 23 17:58:15.203587 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.203394 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Apr 23 17:58:15.204390 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.204371 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\""
Apr 23 17:58:15.205134 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.204931 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Apr 23 17:58:15.205802 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.205467 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\""
Apr 23 17:58:15.205802 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.205573 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\""
Apr 23 17:58:15.205802 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.205610 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-l9gtc\""
Apr 23 17:58:15.205802 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.205618 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\""
Apr 23 17:58:15.205802 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.205671 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\""
Apr 23 17:58:15.205802 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.205804 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\""
Apr 23 17:58:15.206398 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.206380 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\""
Apr 23 17:58:15.207263 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.207216 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6"
Apr 23 17:58:15.207390 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.207373 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-9d9xv"
Apr 23 17:58:15.208719 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.208703 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:15.208899 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:15.208874 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f"
Apr 23 17:58:15.208979 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.208775 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-cni-netd\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.208979 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.208952 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba95391c-a044-45b6-b86c-e5c745e4e7d1-ovn-node-metrics-cert\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.209088 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.208983 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfwxj\" (UniqueName: \"kubernetes.io/projected/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-kube-api-access-sfwxj\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.209088 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209030 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8f18ab0b-c24e-4d53-9d15-941a178305d9-cni-binary-copy\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8"
Apr 23 17:58:15.209088 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209057 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-sysconfig\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.209236 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209090 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-sysctl-conf\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.209236 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209115 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-sys\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.209236 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209139 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-lib-modules\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.209236 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209180 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-run-ovn\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.209236 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209210 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a6da2ea-0b58-4e0b-957b-258095c2f013-hosts-file\") pod \"node-resolver-mggbx\" (UID: \"4a6da2ea-0b58-4e0b-957b-258095c2f013\") " pod="openshift-dns/node-resolver-mggbx"
Apr 23 17:58:15.209236 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209235 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4a6da2ea-0b58-4e0b-957b-258095c2f013-tmp-dir\") pod \"node-resolver-mggbx\" (UID: \"4a6da2ea-0b58-4e0b-957b-258095c2f013\") " pod="openshift-dns/node-resolver-mggbx"
Apr 23 17:58:15.209510 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209276 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/037e05f6-1827-4968-abeb-530665aa07ab-agent-certs\") pod \"konnectivity-agent-4vkdl\" (UID: \"037e05f6-1827-4968-abeb-530665aa07ab\") " pod="kube-system/konnectivity-agent-4vkdl"
Apr 23 17:58:15.209510 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209305 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knj8w\" (UniqueName: \"kubernetes.io/projected/6985296e-1df6-4584-8a29-5fb68230893f-kube-api-access-knj8w\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.209510 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209378 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba95391c-a044-45b6-b86c-e5c745e4e7d1-ovnkube-config\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.209510 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209383 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\""
Apr 23 17:58:15.209510 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209403 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-run-k8s-cni-cncf-io\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.209510 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209427 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-hostroot\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.209510 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209449 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-run-multus-certs\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.209510 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209473 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9571f146-c9fe-45ac-b2a7-1f4153d46c32-sys\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.209510 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209497 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/9571f146-c9fe-45ac-b2a7-1f4153d46c32-node-exporter-tls\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209521 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-node-log\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209568 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z2wd\" (UniqueName: \"kubernetes.io/projected/4a6da2ea-0b58-4e0b-957b-258095c2f013-kube-api-access-4z2wd\") pod \"node-resolver-mggbx\" (UID: \"4a6da2ea-0b58-4e0b-957b-258095c2f013\") " pod="openshift-dns/node-resolver-mggbx"
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209591 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-cni-binary-copy\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209613 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-kubernetes\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209632 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/9571f146-c9fe-45ac-b2a7-1f4153d46c32-node-exporter-accelerators-collector-config\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209647 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-systemd-units\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209664 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-cni-bin\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209682 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209699 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-multus-daemon-config\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209716 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc4fq\" (UniqueName: \"kubernetes.io/projected/8f18ab0b-c24e-4d53-9d15-941a178305d9-kube-api-access-fc4fq\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8"
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209749 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/9571f146-c9fe-45ac-b2a7-1f4153d46c32-root\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209786 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-var-lib-kubelet\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209809 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/8f18ab0b-c24e-4d53-9d15-941a178305d9-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8"
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209832 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nqs4\" (UniqueName: \"kubernetes.io/projected/41511472-f1af-4c98-ab11-9729dc21519e-kube-api-access-2nqs4\") pod \"iptables-alerter-vdzr8\" (UID: \"41511472-f1af-4c98-ab11-9729dc21519e\") " pod="openshift-network-operator/iptables-alerter-vdzr8"
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209848 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\""
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209852 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckhmr\" (UniqueName: \"kubernetes.io/projected/9571f146-c9fe-45ac-b2a7-1f4153d46c32-kube-api-access-ckhmr\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.209924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209876 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-run-ovn-kubernetes\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.210637 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209912 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4z94\" (UniqueName: \"kubernetes.io/projected/ba95391c-a044-45b6-b86c-e5c745e4e7d1-kube-api-access-d4z94\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.210637 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209944 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-multus-cni-dir\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.210637 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.209992 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-var-lib-cni-multus\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.210637 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210020 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-etc-kubernetes\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.210637 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210023 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-8s92q\""
Apr 23 17:58:15.210637 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210053 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/9571f146-c9fe-45ac-b2a7-1f4153d46c32-node-exporter-wtmp\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.210637 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210096 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-multus-socket-dir-parent\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 
23 17:58:15.210637 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210175 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8f18ab0b-c24e-4d53-9d15-941a178305d9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8" Apr 23 17:58:15.210637 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210210 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9571f146-c9fe-45ac-b2a7-1f4153d46c32-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg" Apr 23 17:58:15.210637 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210238 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-run-systemd\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.210637 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210262 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ba95391c-a044-45b6-b86c-e5c745e4e7d1-ovnkube-script-lib\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.210637 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210310 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-os-release\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s" Apr 23 17:58:15.210637 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210360 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8f18ab0b-c24e-4d53-9d15-941a178305d9-cnibin\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8" Apr 23 17:58:15.210637 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210386 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/41511472-f1af-4c98-ab11-9729dc21519e-iptables-alerter-script\") pod \"iptables-alerter-vdzr8\" (UID: \"41511472-f1af-4c98-ab11-9729dc21519e\") " pod="openshift-network-operator/iptables-alerter-vdzr8" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210650 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210742 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs\") pod \"network-metrics-daemon-nh2kn\" (UID: \"d9157db1-0537-4915-a273-5b7a482bc173\") " pod="openshift-multus/network-metrics-daemon-nh2kn" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210792 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-run-netns\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210817 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41511472-f1af-4c98-ab11-9729dc21519e-host-slash\") pod \"iptables-alerter-vdzr8\" (UID: \"41511472-f1af-4c98-ab11-9729dc21519e\") " pod="openshift-network-operator/iptables-alerter-vdzr8" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210840 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9571f146-c9fe-45ac-b2a7-1f4153d46c32-metrics-client-ca\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210869 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-log-socket\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210892 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-var-lib-cni-bin\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210896 2572 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210906 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210916 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-multus-conf-dir\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210947 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6985296e-1df6-4584-8a29-5fb68230893f-tmp\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210897 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-p7l7k\"" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.210978 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-run-netns\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211009 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-etc-openvswitch\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211032 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba95391c-a044-45b6-b86c-e5c745e4e7d1-env-overrides\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211055 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-host\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211076 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/6985296e-1df6-4584-8a29-5fb68230893f-etc-tuned\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211111 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f18ab0b-c24e-4d53-9d15-941a178305d9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8" Apr 23 17:58:15.211257 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211142 2572 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.212123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211186 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-run\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.212123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211226 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-var-lib-kubelet\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.212123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211260 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-kubelet\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.212123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211350 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-slash\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.212123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211375 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-var-lib-openvswitch\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.212123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211398 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-run-openvswitch\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.212123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211420 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-modprobe-d\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.212123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211442 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-sysctl-d\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.212123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211465 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: 
\"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-systemd\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.212123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211488 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/9571f146-c9fe-45ac-b2a7-1f4153d46c32-node-exporter-textfile\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg" Apr 23 17:58:15.212123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211511 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9k5h\" (UniqueName: \"kubernetes.io/projected/d9157db1-0537-4915-a273-5b7a482bc173-kube-api-access-f9k5h\") pod \"network-metrics-daemon-nh2kn\" (UID: \"d9157db1-0537-4915-a273-5b7a482bc173\") " pod="openshift-multus/network-metrics-daemon-nh2kn" Apr 23 17:58:15.212123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211532 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/037e05f6-1827-4968-abeb-530665aa07ab-konnectivity-ca\") pod \"konnectivity-agent-4vkdl\" (UID: \"037e05f6-1827-4968-abeb-530665aa07ab\") " pod="kube-system/konnectivity-agent-4vkdl" Apr 23 17:58:15.212123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211554 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-system-cni-dir\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s" Apr 23 17:58:15.212123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211577 
2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-cnibin\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s" Apr 23 17:58:15.212123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211601 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8f18ab0b-c24e-4d53-9d15-941a178305d9-system-cni-dir\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8" Apr 23 17:58:15.212123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.211622 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8f18ab0b-c24e-4d53-9d15-941a178305d9-os-release\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8" Apr 23 17:58:15.240960 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.240928 2572 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-22 17:53:14 +0000 UTC" deadline="2028-01-15 19:05:53.865639778 +0000 UTC" Apr 23 17:58:15.240960 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.240959 2572 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="15169h7m38.624684375s" Apr 23 17:58:15.300773 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.300750 2572 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 23 17:58:15.312716 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.312690 2572 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-sysctl-d\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.312716 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.312722 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-systemd\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.312918 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.312743 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/9571f146-c9fe-45ac-b2a7-1f4153d46c32-node-exporter-textfile\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg" Apr 23 17:58:15.312918 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.312768 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f9k5h\" (UniqueName: \"kubernetes.io/projected/d9157db1-0537-4915-a273-5b7a482bc173-kube-api-access-f9k5h\") pod \"network-metrics-daemon-nh2kn\" (UID: \"d9157db1-0537-4915-a273-5b7a482bc173\") " pod="openshift-multus/network-metrics-daemon-nh2kn" Apr 23 17:58:15.312918 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.312790 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/037e05f6-1827-4968-abeb-530665aa07ab-konnectivity-ca\") pod \"konnectivity-agent-4vkdl\" (UID: \"037e05f6-1827-4968-abeb-530665aa07ab\") " pod="kube-system/konnectivity-agent-4vkdl" Apr 23 17:58:15.312918 ip-10-0-130-162 
kubenswrapper[2572]: I0423 17:58:15.312809 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-system-cni-dir\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s" Apr 23 17:58:15.312918 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.312828 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-cnibin\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s" Apr 23 17:58:15.312918 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.312834 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-systemd\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.312918 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.312851 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8f18ab0b-c24e-4d53-9d15-941a178305d9-system-cni-dir\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8" Apr 23 17:58:15.312918 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.312875 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-sysctl-d\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.312918 ip-10-0-130-162 kubenswrapper[2572]: I0423 
17:58:15.312896 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8f18ab0b-c24e-4d53-9d15-941a178305d9-os-release\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8" Apr 23 17:58:15.312918 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.312908 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-system-cni-dir\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s" Apr 23 17:58:15.313411 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.312918 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8f18ab0b-c24e-4d53-9d15-941a178305d9-system-cni-dir\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8" Apr 23 17:58:15.313411 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.312929 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-cni-netd\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.313411 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.312963 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-cni-netd\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.313411 ip-10-0-130-162 
kubenswrapper[2572]: I0423 17:58:15.312968 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8f18ab0b-c24e-4d53-9d15-941a178305d9-os-release\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8" Apr 23 17:58:15.313411 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.312990 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba95391c-a044-45b6-b86c-e5c745e4e7d1-ovn-node-metrics-cert\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.313411 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.312992 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-cnibin\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s" Apr 23 17:58:15.313411 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313014 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sfwxj\" (UniqueName: \"kubernetes.io/projected/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-kube-api-access-sfwxj\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s" Apr 23 17:58:15.313411 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313037 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8f18ab0b-c24e-4d53-9d15-941a178305d9-cni-binary-copy\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8" Apr 23 17:58:15.313411 
ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313064 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-sys-fs\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6"
Apr 23 17:58:15.313411 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313101 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986-host\") pod \"node-ca-9d9xv\" (UID: \"465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986\") " pod="openshift-image-registry/node-ca-9d9xv"
Apr 23 17:58:15.313411 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313121 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986-serviceca\") pod \"node-ca-9d9xv\" (UID: \"465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986\") " pod="openshift-image-registry/node-ca-9d9xv"
Apr 23 17:58:15.313411 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313178 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-sysconfig\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.313411 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313207 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-sysctl-conf\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.313411 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313231 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-sys\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.313411 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313257 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-lib-modules\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.313411 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313301 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-run-ovn\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.313411 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313334 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-sysconfig\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.313411 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313376 2572 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313388 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-run-ovn\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313419 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a6da2ea-0b58-4e0b-957b-258095c2f013-hosts-file\") pod \"node-resolver-mggbx\" (UID: \"4a6da2ea-0b58-4e0b-957b-258095c2f013\") " pod="openshift-dns/node-resolver-mggbx"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313443 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4a6da2ea-0b58-4e0b-957b-258095c2f013-tmp-dir\") pod \"node-resolver-mggbx\" (UID: \"4a6da2ea-0b58-4e0b-957b-258095c2f013\") " pod="openshift-dns/node-resolver-mggbx"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313465 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-lib-modules\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313488 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/037e05f6-1827-4968-abeb-530665aa07ab-konnectivity-ca\") pod \"konnectivity-agent-4vkdl\" (UID: \"037e05f6-1827-4968-abeb-530665aa07ab\") " pod="kube-system/konnectivity-agent-4vkdl"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313580 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-sysctl-conf\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313492 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/037e05f6-1827-4968-abeb-530665aa07ab-agent-certs\") pod \"konnectivity-agent-4vkdl\" (UID: \"037e05f6-1827-4968-abeb-530665aa07ab\") " pod="kube-system/konnectivity-agent-4vkdl"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313650 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-knj8w\" (UniqueName: \"kubernetes.io/projected/6985296e-1df6-4584-8a29-5fb68230893f-kube-api-access-knj8w\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313668 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8f18ab0b-c24e-4d53-9d15-941a178305d9-cni-binary-copy\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313685 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba95391c-a044-45b6-b86c-e5c745e4e7d1-ovnkube-config\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313647 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a6da2ea-0b58-4e0b-957b-258095c2f013-hosts-file\") pod \"node-resolver-mggbx\" (UID: \"4a6da2ea-0b58-4e0b-957b-258095c2f013\") " pod="openshift-dns/node-resolver-mggbx"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313720 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-run-k8s-cni-cncf-io\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313745 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-hostroot\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313764 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4a6da2ea-0b58-4e0b-957b-258095c2f013-tmp-dir\") pod \"node-resolver-mggbx\" (UID: \"4a6da2ea-0b58-4e0b-957b-258095c2f013\") " pod="openshift-dns/node-resolver-mggbx"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313770 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-run-multus-certs\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313836 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-socket-dir\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313841 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-run-k8s-cni-cncf-io\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.314212 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313863 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9571f146-c9fe-45ac-b2a7-1f4153d46c32-sys\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313890 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/9571f146-c9fe-45ac-b2a7-1f4153d46c32-node-exporter-tls\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313920 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-node-log\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313922 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9571f146-c9fe-45ac-b2a7-1f4153d46c32-sys\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313934 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-hostroot\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313945 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4z2wd\" (UniqueName: \"kubernetes.io/projected/4a6da2ea-0b58-4e0b-957b-258095c2f013-kube-api-access-4z2wd\") pod \"node-resolver-mggbx\" (UID: \"4a6da2ea-0b58-4e0b-957b-258095c2f013\") " pod="openshift-dns/node-resolver-mggbx"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313972 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-cni-binary-copy\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313998 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-kubernetes\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314023 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/9571f146-c9fe-45ac-b2a7-1f4153d46c32-node-exporter-accelerators-collector-config\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313387 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-sys\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314047 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-systemd-units\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314086 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-cni-bin\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.313883 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-run-multus-certs\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314111 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/9571f146-c9fe-45ac-b2a7-1f4153d46c32-node-exporter-textfile\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314090 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-systemd-units\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314140 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-multus-daemon-config\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314164 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-cni-bin\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314170 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fc4fq\" (UniqueName: \"kubernetes.io/projected/8f18ab0b-c24e-4d53-9d15-941a178305d9-kube-api-access-fc4fq\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8"
Apr 23 17:58:15.315005 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314178 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-node-log\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314197 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-kubernetes\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314224 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/9571f146-c9fe-45ac-b2a7-1f4153d46c32-root\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314253 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-var-lib-kubelet\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314294 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/8f18ab0b-c24e-4d53-9d15-941a178305d9-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314341 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2nqs4\" (UniqueName: \"kubernetes.io/projected/41511472-f1af-4c98-ab11-9729dc21519e-kube-api-access-2nqs4\") pod \"iptables-alerter-vdzr8\" (UID: \"41511472-f1af-4c98-ab11-9729dc21519e\") " pod="openshift-network-operator/iptables-alerter-vdzr8"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314369 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ckhmr\" (UniqueName: \"kubernetes.io/projected/9571f146-c9fe-45ac-b2a7-1f4153d46c32-kube-api-access-ckhmr\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314399 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-run-ovn-kubernetes\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314467 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d4z94\" (UniqueName: \"kubernetes.io/projected/ba95391c-a044-45b6-b86c-e5c745e4e7d1-kube-api-access-d4z94\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314531 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/9571f146-c9fe-45ac-b2a7-1f4153d46c32-root\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314744 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba95391c-a044-45b6-b86c-e5c745e4e7d1-ovnkube-config\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.314758 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-run-ovn-kubernetes\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.315097 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/9571f146-c9fe-45ac-b2a7-1f4153d46c32-node-exporter-accelerators-collector-config\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.315476 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-cni-binary-copy\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.315535 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-multus-daemon-config\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.315549 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-var-lib-kubelet\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.315614 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-multus-cni-dir\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.315908 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.315767 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-multus-cni-dir\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.315810 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-var-lib-cni-multus\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.315840 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-etc-kubernetes\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.315868 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/9571f146-c9fe-45ac-b2a7-1f4153d46c32-node-exporter-wtmp\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.315897 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-multus-socket-dir-parent\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.315928 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-var-lib-cni-multus\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.315929 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-etc-kubernetes\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316006 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-multus-socket-dir-parent\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316016 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/9571f146-c9fe-45ac-b2a7-1f4153d46c32-node-exporter-wtmp\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.315930 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8f18ab0b-c24e-4d53-9d15-941a178305d9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316068 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9571f146-c9fe-45ac-b2a7-1f4153d46c32-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316098 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-run-systemd\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316124 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ba95391c-a044-45b6-b86c-e5c745e4e7d1-ovnkube-script-lib\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316149 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-os-release\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316173 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8f18ab0b-c24e-4d53-9d15-941a178305d9-cnibin\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316201 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-device-dir\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316226 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdtc2\" (UniqueName: \"kubernetes.io/projected/6c2d8786-f36a-4e54-a020-66da9e674ee1-kube-api-access-gdtc2\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6"
Apr 23 17:58:15.316566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316255 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/41511472-f1af-4c98-ab11-9729dc21519e-iptables-alerter-script\") pod \"iptables-alerter-vdzr8\" (UID: \"41511472-f1af-4c98-ab11-9729dc21519e\") " pod="openshift-network-operator/iptables-alerter-vdzr8"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316279 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs\") pod \"network-metrics-daemon-nh2kn\" (UID: \"d9157db1-0537-4915-a273-5b7a482bc173\") " pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316302 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-run-netns\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316343 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41511472-f1af-4c98-ab11-9729dc21519e-host-slash\") pod \"iptables-alerter-vdzr8\" (UID: \"41511472-f1af-4c98-ab11-9729dc21519e\") " pod="openshift-network-operator/iptables-alerter-vdzr8"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316368 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9571f146-c9fe-45ac-b2a7-1f4153d46c32-metrics-client-ca\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316391 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-log-socket\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316397 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/8f18ab0b-c24e-4d53-9d15-941a178305d9-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316431 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-var-lib-cni-bin\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316481 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-multus-conf-dir\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316489 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-log-socket\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316511 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-registration-dir\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316518 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-var-lib-cni-bin\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316546 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-etc-selinux\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316572 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmmx9\" (UniqueName: \"kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9\") pod \"network-check-target-dpfbr\" (UID: \"17e9a772-9316-4c67-bffe-e44ea2915f0f\") " pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316601 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-multus-conf-dir\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316604 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6985296e-1df6-4584-8a29-5fb68230893f-tmp\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316645 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-os-release\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.317418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316650 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-run-netns\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.318213 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316434 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8f18ab0b-c24e-4d53-9d15-941a178305d9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8"
Apr 23 17:58:15.318213 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316708 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-etc-openvswitch\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.318213 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316734 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba95391c-a044-45b6-b86c-e5c745e4e7d1-env-overrides\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.318213 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316761 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-host\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.318213 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316783 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/6985296e-1df6-4584-8a29-5fb68230893f-etc-tuned\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.318213 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316809 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f18ab0b-c24e-4d53-9d15-941a178305d9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8"
Apr 23 17:58:15.318213 ip-10-0-130-162
kubenswrapper[2572]: I0423 17:58:15.316839 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-kubelet-dir\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" Apr 23 17:58:15.318213 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316867 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s6sv\" (UniqueName: \"kubernetes.io/projected/465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986-kube-api-access-7s6sv\") pod \"node-ca-9d9xv\" (UID: \"465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986\") " pod="openshift-image-registry/node-ca-9d9xv" Apr 23 17:58:15.318213 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316895 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.318213 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316993 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-run\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.318213 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.316998 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/41511472-f1af-4c98-ab11-9729dc21519e-iptables-alerter-script\") pod \"iptables-alerter-vdzr8\" (UID: 
\"41511472-f1af-4c98-ab11-9729dc21519e\") " pod="openshift-network-operator/iptables-alerter-vdzr8" Apr 23 17:58:15.318213 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317021 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-var-lib-kubelet\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.318213 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317041 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.318213 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317047 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-kubelet\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.318213 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317071 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41511472-f1af-4c98-ab11-9729dc21519e-host-slash\") pod \"iptables-alerter-vdzr8\" (UID: \"41511472-f1af-4c98-ab11-9729dc21519e\") " pod="openshift-network-operator/iptables-alerter-vdzr8" Apr 23 17:58:15.318213 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317074 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-slash\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.318213 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317108 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-slash\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317116 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-var-lib-openvswitch\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317147 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-run-openvswitch\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317163 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-host\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317181 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/ba95391c-a044-45b6-b86c-e5c745e4e7d1-ovnkube-script-lib\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317230 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-run-systemd\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317269 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-run-netns\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317338 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-var-lib-kubelet\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317446 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f18ab0b-c24e-4d53-9d15-941a178305d9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317482 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-etc-openvswitch\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317487 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-run\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317518 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-run-openvswitch\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317535 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-host-run-netns\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317539 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba95391c-a044-45b6-b86c-e5c745e4e7d1-env-overrides\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317560 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-host-kubelet\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317588 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8f18ab0b-c24e-4d53-9d15-941a178305d9-cnibin\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317642 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba95391c-a044-45b6-b86c-e5c745e4e7d1-var-lib-openvswitch\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.318934 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:15.317649 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:58:15.319609 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:15.317720 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs podName:d9157db1-0537-4915-a273-5b7a482bc173 nodeName:}" failed. No retries permitted until 2026-04-23 17:58:15.817699407 +0000 UTC m=+3.115945539 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs") pod "network-metrics-daemon-nh2kn" (UID: "d9157db1-0537-4915-a273-5b7a482bc173") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:58:15.319609 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317758 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-modprobe-d\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.319609 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317896 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/6985296e-1df6-4584-8a29-5fb68230893f-etc-modprobe-d\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.319609 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.317954 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9571f146-c9fe-45ac-b2a7-1f4153d46c32-metrics-client-ca\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg" Apr 23 17:58:15.319609 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.318022 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/9571f146-c9fe-45ac-b2a7-1f4153d46c32-node-exporter-tls\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg" Apr 23 17:58:15.319609 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.318401 2572 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba95391c-a044-45b6-b86c-e5c745e4e7d1-ovn-node-metrics-cert\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.319812 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.319729 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6985296e-1df6-4584-8a29-5fb68230893f-tmp\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.319943 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.319927 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/6985296e-1df6-4584-8a29-5fb68230893f-etc-tuned\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.320085 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.320063 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/037e05f6-1827-4968-abeb-530665aa07ab-agent-certs\") pod \"konnectivity-agent-4vkdl\" (UID: \"037e05f6-1827-4968-abeb-530665aa07ab\") " pod="kube-system/konnectivity-agent-4vkdl" Apr 23 17:58:15.322629 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.322606 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9571f146-c9fe-45ac-b2a7-1f4153d46c32-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg" Apr 23 17:58:15.326105 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.326081 2572 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9k5h\" (UniqueName: \"kubernetes.io/projected/d9157db1-0537-4915-a273-5b7a482bc173-kube-api-access-f9k5h\") pod \"network-metrics-daemon-nh2kn\" (UID: \"d9157db1-0537-4915-a273-5b7a482bc173\") " pod="openshift-multus/network-metrics-daemon-nh2kn" Apr 23 17:58:15.327526 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.327502 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z2wd\" (UniqueName: \"kubernetes.io/projected/4a6da2ea-0b58-4e0b-957b-258095c2f013-kube-api-access-4z2wd\") pod \"node-resolver-mggbx\" (UID: \"4a6da2ea-0b58-4e0b-957b-258095c2f013\") " pod="openshift-dns/node-resolver-mggbx" Apr 23 17:58:15.327975 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.327945 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-knj8w\" (UniqueName: \"kubernetes.io/projected/6985296e-1df6-4584-8a29-5fb68230893f-kube-api-access-knj8w\") pod \"tuned-rf4xp\" (UID: \"6985296e-1df6-4584-8a29-5fb68230893f\") " pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" Apr 23 17:58:15.328886 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.328838 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nqs4\" (UniqueName: \"kubernetes.io/projected/41511472-f1af-4c98-ab11-9729dc21519e-kube-api-access-2nqs4\") pod \"iptables-alerter-vdzr8\" (UID: \"41511472-f1af-4c98-ab11-9729dc21519e\") " pod="openshift-network-operator/iptables-alerter-vdzr8" Apr 23 17:58:15.328978 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.328962 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4z94\" (UniqueName: \"kubernetes.io/projected/ba95391c-a044-45b6-b86c-e5c745e4e7d1-kube-api-access-d4z94\") pod \"ovnkube-node-rz688\" (UID: \"ba95391c-a044-45b6-b86c-e5c745e4e7d1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:58:15.329364 
ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.329313 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc4fq\" (UniqueName: \"kubernetes.io/projected/8f18ab0b-c24e-4d53-9d15-941a178305d9-kube-api-access-fc4fq\") pod \"multus-additional-cni-plugins-lg6b8\" (UID: \"8f18ab0b-c24e-4d53-9d15-941a178305d9\") " pod="openshift-multus/multus-additional-cni-plugins-lg6b8" Apr 23 17:58:15.329830 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.329812 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfwxj\" (UniqueName: \"kubernetes.io/projected/e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a-kube-api-access-sfwxj\") pod \"multus-69j8s\" (UID: \"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a\") " pod="openshift-multus/multus-69j8s" Apr 23 17:58:15.330368 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.330346 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckhmr\" (UniqueName: \"kubernetes.io/projected/9571f146-c9fe-45ac-b2a7-1f4153d46c32-kube-api-access-ckhmr\") pod \"node-exporter-b6xsg\" (UID: \"9571f146-c9fe-45ac-b2a7-1f4153d46c32\") " pod="openshift-monitoring/node-exporter-b6xsg" Apr 23 17:58:15.418230 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418202 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-sys-fs\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" Apr 23 17:58:15.418230 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418231 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986-host\") pod \"node-ca-9d9xv\" (UID: \"465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986\") " pod="openshift-image-registry/node-ca-9d9xv" Apr 
23 17:58:15.418230 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418246 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986-serviceca\") pod \"node-ca-9d9xv\" (UID: \"465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986\") " pod="openshift-image-registry/node-ca-9d9xv" Apr 23 17:58:15.418453 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418356 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986-host\") pod \"node-ca-9d9xv\" (UID: \"465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986\") " pod="openshift-image-registry/node-ca-9d9xv" Apr 23 17:58:15.418453 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418379 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-sys-fs\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" Apr 23 17:58:15.418453 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418406 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-socket-dir\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" Apr 23 17:58:15.418545 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418451 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-device-dir\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" Apr 
23 17:58:15.418545 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418494 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-device-dir\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" Apr 23 17:58:15.418604 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418541 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gdtc2\" (UniqueName: \"kubernetes.io/projected/6c2d8786-f36a-4e54-a020-66da9e674ee1-kube-api-access-gdtc2\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" Apr 23 17:58:15.418604 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418569 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-socket-dir\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" Apr 23 17:58:15.418604 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418593 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-registration-dir\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" Apr 23 17:58:15.418724 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418615 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-etc-selinux\") pod 
\"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" Apr 23 17:58:15.418724 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418632 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tmmx9\" (UniqueName: \"kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9\") pod \"network-check-target-dpfbr\" (UID: \"17e9a772-9316-4c67-bffe-e44ea2915f0f\") " pod="openshift-network-diagnostics/network-check-target-dpfbr" Apr 23 17:58:15.418724 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418657 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-kubelet-dir\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" Apr 23 17:58:15.418724 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418681 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7s6sv\" (UniqueName: \"kubernetes.io/projected/465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986-kube-api-access-7s6sv\") pod \"node-ca-9d9xv\" (UID: \"465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986\") " pod="openshift-image-registry/node-ca-9d9xv" Apr 23 17:58:15.418724 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418696 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-etc-selinux\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" Apr 23 17:58:15.418948 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418745 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-kubelet-dir\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6"
Apr 23 17:58:15.418948 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418779 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986-serviceca\") pod \"node-ca-9d9xv\" (UID: \"465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986\") " pod="openshift-image-registry/node-ca-9d9xv"
Apr 23 17:58:15.418948 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.418779 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6c2d8786-f36a-4e54-a020-66da9e674ee1-registration-dir\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6"
Apr 23 17:58:15.424947 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:15.424926 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:58:15.424947 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:15.424945 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:58:15.425143 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:15.424956 2572 projected.go:194] Error preparing data for projected volume kube-api-access-tmmx9 for pod openshift-network-diagnostics/network-check-target-dpfbr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:58:15.425143 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:15.425017 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9 podName:17e9a772-9316-4c67-bffe-e44ea2915f0f nodeName:}" failed. No retries permitted until 2026-04-23 17:58:15.925000468 +0000 UTC m=+3.223246579 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tmmx9" (UniqueName: "kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9") pod "network-check-target-dpfbr" (UID: "17e9a772-9316-4c67-bffe-e44ea2915f0f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:58:15.428224 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.428141 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s6sv\" (UniqueName: \"kubernetes.io/projected/465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986-kube-api-access-7s6sv\") pod \"node-ca-9d9xv\" (UID: \"465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986\") " pod="openshift-image-registry/node-ca-9d9xv"
Apr 23 17:58:15.428224 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.428147 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdtc2\" (UniqueName: \"kubernetes.io/projected/6c2d8786-f36a-4e54-a020-66da9e674ee1-kube-api-access-gdtc2\") pod \"aws-ebs-csi-driver-node-sb4f6\" (UID: \"6c2d8786-f36a-4e54-a020-66da9e674ee1\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6"
Apr 23 17:58:15.501510 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.501470 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-4vkdl"
Apr 23 17:58:15.508305 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.508275 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-69j8s"
Apr 23 17:58:15.517869 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.517851 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lg6b8"
Apr 23 17:58:15.524549 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.524532 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-vdzr8"
Apr 23 17:58:15.532671 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.532648 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:15.538934 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.538915 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-rf4xp"
Apr 23 17:58:15.545421 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.545399 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mggbx"
Apr 23 17:58:15.551979 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.551959 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-b6xsg"
Apr 23 17:58:15.558500 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.558481 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6"
Apr 23 17:58:15.565001 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.564984 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-9d9xv"
Apr 23 17:58:15.663032 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.663002 2572 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:58:15.822341 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:15.822298 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs\") pod \"network-metrics-daemon-nh2kn\" (UID: \"d9157db1-0537-4915-a273-5b7a482bc173\") " pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:15.822493 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:15.822419 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:58:15.822493 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:15.822476 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs podName:d9157db1-0537-4915-a273-5b7a482bc173 nodeName:}" failed. No retries permitted until 2026-04-23 17:58:16.822460436 +0000 UTC m=+4.120706547 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs") pod "network-metrics-daemon-nh2kn" (UID: "d9157db1-0537-4915-a273-5b7a482bc173") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:58:15.872041 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:15.871988 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f18ab0b_c24e_4d53_9d15_941a178305d9.slice/crio-a790249654f7d1de9775e7ddedc08f95b2cd31010d95d5554467688bbbb45140 WatchSource:0}: Error finding container a790249654f7d1de9775e7ddedc08f95b2cd31010d95d5554467688bbbb45140: Status 404 returned error can't find the container with id a790249654f7d1de9775e7ddedc08f95b2cd31010d95d5554467688bbbb45140
Apr 23 17:58:15.873366 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:15.873030 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod037e05f6_1827_4968_abeb_530665aa07ab.slice/crio-59c8601476549e3bc2990b08f527110bf176853c699f8059be8e39e5554acc75 WatchSource:0}: Error finding container 59c8601476549e3bc2990b08f527110bf176853c699f8059be8e39e5554acc75: Status 404 returned error can't find the container with id 59c8601476549e3bc2990b08f527110bf176853c699f8059be8e39e5554acc75
Apr 23 17:58:15.876599 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:15.876571 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c2d8786_f36a_4e54_a020_66da9e674ee1.slice/crio-76cec319fd1c458554d8ac08b0c1fb7afae3be2ed68c25a6d8fb84f4746e7f51 WatchSource:0}: Error finding container 76cec319fd1c458554d8ac08b0c1fb7afae3be2ed68c25a6d8fb84f4746e7f51: Status 404 returned error can't find the container with id 76cec319fd1c458554d8ac08b0c1fb7afae3be2ed68c25a6d8fb84f4746e7f51
Apr 23 17:58:15.877285 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:15.877267 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9571f146_c9fe_45ac_b2a7_1f4153d46c32.slice/crio-114daf1b4b86081cb15a62e084c1a3673fb310b8f5e987c4344b071220c75b19 WatchSource:0}: Error finding container 114daf1b4b86081cb15a62e084c1a3673fb310b8f5e987c4344b071220c75b19: Status 404 returned error can't find the container with id 114daf1b4b86081cb15a62e084c1a3673fb310b8f5e987c4344b071220c75b19
Apr 23 17:58:15.878389 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:15.878364 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a6da2ea_0b58_4e0b_957b_258095c2f013.slice/crio-3c1d69c2b2e0b8cf74aa09710b735575258bb7f69d3a04b0b994c66a081f8f08 WatchSource:0}: Error finding container 3c1d69c2b2e0b8cf74aa09710b735575258bb7f69d3a04b0b994c66a081f8f08: Status 404 returned error can't find the container with id 3c1d69c2b2e0b8cf74aa09710b735575258bb7f69d3a04b0b994c66a081f8f08
Apr 23 17:58:15.879171 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:15.879099 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba95391c_a044_45b6_b86c_e5c745e4e7d1.slice/crio-7c2bd3c30d68b27a845c9e067de5bdbc212cf1e630fc24e721114cd6becd6853 WatchSource:0}: Error finding container 7c2bd3c30d68b27a845c9e067de5bdbc212cf1e630fc24e721114cd6becd6853: Status 404 returned error can't find the container with id 7c2bd3c30d68b27a845c9e067de5bdbc212cf1e630fc24e721114cd6becd6853
Apr 23 17:58:15.880572 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:15.880200 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6985296e_1df6_4584_8a29_5fb68230893f.slice/crio-7f95fe5e2526e1f120d35bd3a2b096dd7117ac9987fe02dddac74ba133710d58 WatchSource:0}: Error finding container 7f95fe5e2526e1f120d35bd3a2b096dd7117ac9987fe02dddac74ba133710d58: Status 404 returned error can't find the container with id 7f95fe5e2526e1f120d35bd3a2b096dd7117ac9987fe02dddac74ba133710d58
Apr 23 17:58:15.880975 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:15.880946 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod465ff8c4_e8a9_4cb7_8353_e5f7d5a8b986.slice/crio-889d0c04e45ccc474bbe9775a4d619d70b82f9a3f43b904a543e684295dac746 WatchSource:0}: Error finding container 889d0c04e45ccc474bbe9775a4d619d70b82f9a3f43b904a543e684295dac746: Status 404 returned error can't find the container with id 889d0c04e45ccc474bbe9775a4d619d70b82f9a3f43b904a543e684295dac746
Apr 23 17:58:15.882822 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:15.882799 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9b51bff_ccdd_40d2_a1c6_c65fe8cff43a.slice/crio-14e695aa8111731c2ca5947185b088f1c528a41fb47436c41d11ee271436793f WatchSource:0}: Error finding container 14e695aa8111731c2ca5947185b088f1c528a41fb47436c41d11ee271436793f: Status 404 returned error can't find the container with id 14e695aa8111731c2ca5947185b088f1c528a41fb47436c41d11ee271436793f
Apr 23 17:58:15.883683 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:15.883660 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41511472_f1af_4c98_ab11_9729dc21519e.slice/crio-d3666067e03bb57bd75ce030fdecdb560c92c4884ff112545546ac39b1fcc05f WatchSource:0}: Error finding container d3666067e03bb57bd75ce030fdecdb560c92c4884ff112545546ac39b1fcc05f: Status 404 returned error can't find the container with id d3666067e03bb57bd75ce030fdecdb560c92c4884ff112545546ac39b1fcc05f
Apr 23 17:58:16.024378 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:16.024343 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tmmx9\" (UniqueName: \"kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9\") pod \"network-check-target-dpfbr\" (UID: \"17e9a772-9316-4c67-bffe-e44ea2915f0f\") " pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:16.024517 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:16.024450 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:58:16.024517 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:16.024463 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:58:16.024517 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:16.024472 2572 projected.go:194] Error preparing data for projected volume kube-api-access-tmmx9 for pod openshift-network-diagnostics/network-check-target-dpfbr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:58:16.024517 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:16.024515 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9 podName:17e9a772-9316-4c67-bffe-e44ea2915f0f nodeName:}" failed. No retries permitted until 2026-04-23 17:58:17.02450128 +0000 UTC m=+4.322747392 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "kube-api-access-tmmx9" (UniqueName: "kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9") pod "network-check-target-dpfbr" (UID: "17e9a772-9316-4c67-bffe-e44ea2915f0f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:58:16.241215 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:16.241145 2572 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-22 17:53:14 +0000 UTC" deadline="2027-11-18 19:40:54.46140905 +0000 UTC"
Apr 23 17:58:16.241215 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:16.241181 2572 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="13777h42m38.220231807s"
Apr 23 17:58:16.306288 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:16.304575 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:16.306288 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:16.304713 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f"
Apr 23 17:58:16.317528 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:16.316377 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-4vkdl" event={"ID":"037e05f6-1827-4968-abeb-530665aa07ab","Type":"ContainerStarted","Data":"59c8601476549e3bc2990b08f527110bf176853c699f8059be8e39e5554acc75"}
Apr 23 17:58:16.322870 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:16.322837 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lg6b8" event={"ID":"8f18ab0b-c24e-4d53-9d15-941a178305d9","Type":"ContainerStarted","Data":"a790249654f7d1de9775e7ddedc08f95b2cd31010d95d5554467688bbbb45140"}
Apr 23 17:58:16.337607 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:16.337566 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-69j8s" event={"ID":"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a","Type":"ContainerStarted","Data":"14e695aa8111731c2ca5947185b088f1c528a41fb47436c41d11ee271436793f"}
Apr 23 17:58:16.340884 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:16.340854 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9d9xv" event={"ID":"465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986","Type":"ContainerStarted","Data":"889d0c04e45ccc474bbe9775a4d619d70b82f9a3f43b904a543e684295dac746"}
Apr 23 17:58:16.348453 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:16.348429 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rz688" event={"ID":"ba95391c-a044-45b6-b86c-e5c745e4e7d1","Type":"ContainerStarted","Data":"7c2bd3c30d68b27a845c9e067de5bdbc212cf1e630fc24e721114cd6becd6853"}
Apr 23 17:58:16.353612 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:16.353582 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mggbx" event={"ID":"4a6da2ea-0b58-4e0b-957b-258095c2f013","Type":"ContainerStarted","Data":"3c1d69c2b2e0b8cf74aa09710b735575258bb7f69d3a04b0b994c66a081f8f08"}
Apr 23 17:58:16.360803 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:16.360778 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b6xsg" event={"ID":"9571f146-c9fe-45ac-b2a7-1f4153d46c32","Type":"ContainerStarted","Data":"114daf1b4b86081cb15a62e084c1a3673fb310b8f5e987c4344b071220c75b19"}
Apr 23 17:58:16.366282 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:16.366229 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-130-162.ec2.internal" event={"ID":"55d1c989d003a1c5d6c5adfec051c073","Type":"ContainerStarted","Data":"a9cc25461cc360baffed747a7d550e405a8b20911280d71e8f8c6b437587233d"}
Apr 23 17:58:16.376730 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:16.376704 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-vdzr8" event={"ID":"41511472-f1af-4c98-ab11-9729dc21519e","Type":"ContainerStarted","Data":"d3666067e03bb57bd75ce030fdecdb560c92c4884ff112545546ac39b1fcc05f"}
Apr 23 17:58:16.391175 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:16.391145 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" event={"ID":"6985296e-1df6-4584-8a29-5fb68230893f","Type":"ContainerStarted","Data":"7f95fe5e2526e1f120d35bd3a2b096dd7117ac9987fe02dddac74ba133710d58"}
Apr 23 17:58:16.401989 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:16.401963 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" event={"ID":"6c2d8786-f36a-4e54-a020-66da9e674ee1","Type":"ContainerStarted","Data":"76cec319fd1c458554d8ac08b0c1fb7afae3be2ed68c25a6d8fb84f4746e7f51"}
Apr 23 17:58:16.831754 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:16.831715 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs\") pod \"network-metrics-daemon-nh2kn\" (UID: \"d9157db1-0537-4915-a273-5b7a482bc173\") " pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:16.831925 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:16.831893 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:58:16.832000 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:16.831959 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs podName:d9157db1-0537-4915-a273-5b7a482bc173 nodeName:}" failed. No retries permitted until 2026-04-23 17:58:18.831938664 +0000 UTC m=+6.130184776 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs") pod "network-metrics-daemon-nh2kn" (UID: "d9157db1-0537-4915-a273-5b7a482bc173") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:58:17.034601 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:17.034572 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tmmx9\" (UniqueName: \"kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9\") pod \"network-check-target-dpfbr\" (UID: \"17e9a772-9316-4c67-bffe-e44ea2915f0f\") " pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:17.034744 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:17.034728 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:58:17.034810 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:17.034751 2572
projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:58:17.034810 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:17.034788 2572 projected.go:194] Error preparing data for projected volume kube-api-access-tmmx9 for pod openshift-network-diagnostics/network-check-target-dpfbr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:58:17.034927 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:17.034846 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9 podName:17e9a772-9316-4c67-bffe-e44ea2915f0f nodeName:}" failed. No retries permitted until 2026-04-23 17:58:19.034827118 +0000 UTC m=+6.333073252 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tmmx9" (UniqueName: "kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9") pod "network-check-target-dpfbr" (UID: "17e9a772-9316-4c67-bffe-e44ea2915f0f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:58:17.075500 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:17.075312 2572 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 23 17:58:17.305633 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:17.305566 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:17.306065 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:17.305865 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nh2kn" podUID="d9157db1-0537-4915-a273-5b7a482bc173"
Apr 23 17:58:17.421454 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:17.421223 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal" event={"ID":"0b2178ea263236be918f2343e8d4bd48","Type":"ContainerStarted","Data":"9e15ed95e0478b99f5d0d5c399b90d22bf35731bf254ffc52b7e265f73706529"}
Apr 23 17:58:17.437199 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:17.436468 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-130-162.ec2.internal" podStartSLOduration=3.436448972 podStartE2EDuration="3.436448972s" podCreationTimestamp="2026-04-23 17:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:58:16.383374686 +0000 UTC m=+3.681620821" watchObservedRunningTime="2026-04-23 17:58:17.436448972 +0000 UTC m=+4.734695107"
Apr 23 17:58:18.303785 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:18.303751 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:18.303973 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:18.303882 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f"
Apr 23 17:58:18.425819 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:18.425780 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b6xsg" event={"ID":"9571f146-c9fe-45ac-b2a7-1f4153d46c32","Type":"ContainerStarted","Data":"fa3d4d7387d41f0b3ad1c5764cdbf16a03bf38755bc4f015816d6085e281365c"}
Apr 23 17:58:18.434149 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:18.434119 2572 generic.go:358] "Generic (PLEG): container finished" podID="0b2178ea263236be918f2343e8d4bd48" containerID="9e15ed95e0478b99f5d0d5c399b90d22bf35731bf254ffc52b7e265f73706529" exitCode=0
Apr 23 17:58:18.434272 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:18.434172 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal" event={"ID":"0b2178ea263236be918f2343e8d4bd48","Type":"ContainerDied","Data":"9e15ed95e0478b99f5d0d5c399b90d22bf35731bf254ffc52b7e265f73706529"}
Apr 23 17:58:18.853224 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:18.852599 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs\") pod \"network-metrics-daemon-nh2kn\" (UID: \"d9157db1-0537-4915-a273-5b7a482bc173\") " pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:18.853224 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:18.852770 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:58:18.853224 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:18.852834 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs podName:d9157db1-0537-4915-a273-5b7a482bc173 nodeName:}" failed. No retries permitted until 2026-04-23 17:58:22.852814627 +0000 UTC m=+10.151060757 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs") pod "network-metrics-daemon-nh2kn" (UID: "d9157db1-0537-4915-a273-5b7a482bc173") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:58:19.054423 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:19.054386 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tmmx9\" (UniqueName: \"kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9\") pod \"network-check-target-dpfbr\" (UID: \"17e9a772-9316-4c67-bffe-e44ea2915f0f\") " pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:19.054706 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:19.054607 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:58:19.054706 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:19.054631 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:58:19.054706 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:19.054645 2572 projected.go:194] Error preparing data for projected volume
kube-api-access-tmmx9 for pod openshift-network-diagnostics/network-check-target-dpfbr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:58:19.054706 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:19.054706 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9 podName:17e9a772-9316-4c67-bffe-e44ea2915f0f nodeName:}" failed. No retries permitted until 2026-04-23 17:58:23.054687642 +0000 UTC m=+10.352933761 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tmmx9" (UniqueName: "kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9") pod "network-check-target-dpfbr" (UID: "17e9a772-9316-4c67-bffe-e44ea2915f0f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:58:19.304247 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:19.304213 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:19.304444 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:19.304369 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nh2kn" podUID="d9157db1-0537-4915-a273-5b7a482bc173"
Apr 23 17:58:19.438789 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:19.438682 2572 generic.go:358] "Generic (PLEG): container finished" podID="9571f146-c9fe-45ac-b2a7-1f4153d46c32" containerID="fa3d4d7387d41f0b3ad1c5764cdbf16a03bf38755bc4f015816d6085e281365c" exitCode=0
Apr 23 17:58:19.438789 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:19.438738 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b6xsg" event={"ID":"9571f146-c9fe-45ac-b2a7-1f4153d46c32","Type":"ContainerDied","Data":"fa3d4d7387d41f0b3ad1c5764cdbf16a03bf38755bc4f015816d6085e281365c"}
Apr 23 17:58:20.304547 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:20.304513 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:20.304767 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:20.304659 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f"
Apr 23 17:58:21.304056 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:21.303934 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:21.304532 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:21.304083 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nh2kn" podUID="d9157db1-0537-4915-a273-5b7a482bc173"
Apr 23 17:58:22.304524 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:22.304489 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:22.304969 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:22.304686 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f"
Apr 23 17:58:22.886730 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:22.886659 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs\") pod \"network-metrics-daemon-nh2kn\" (UID: \"d9157db1-0537-4915-a273-5b7a482bc173\") " pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:22.886894 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:22.886792 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:58:22.886894 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:22.886881 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs podName:d9157db1-0537-4915-a273-5b7a482bc173 nodeName:}" failed. No retries permitted until 2026-04-23 17:58:30.886849505 +0000 UTC m=+18.185095618 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs") pod "network-metrics-daemon-nh2kn" (UID: "d9157db1-0537-4915-a273-5b7a482bc173") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:58:23.088606 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:23.087943 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tmmx9\" (UniqueName: \"kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9\") pod \"network-check-target-dpfbr\" (UID: \"17e9a772-9316-4c67-bffe-e44ea2915f0f\") " pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:23.088606 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:23.088152 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:58:23.088606 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:23.088171 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:58:23.088606 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:23.088186 2572 projected.go:194] Error preparing data for projected volume kube-api-access-tmmx9 for pod openshift-network-diagnostics/network-check-target-dpfbr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:58:23.088606 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:23.088242 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9 podName:17e9a772-9316-4c67-bffe-e44ea2915f0f nodeName:}" failed. No retries permitted until 2026-04-23 17:58:31.088224669 +0000 UTC m=+18.386470803 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tmmx9" (UniqueName: "kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9") pod "network-check-target-dpfbr" (UID: "17e9a772-9316-4c67-bffe-e44ea2915f0f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:58:23.305614 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:23.304863 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:23.305614 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:23.305206 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nh2kn" podUID="d9157db1-0537-4915-a273-5b7a482bc173"
Apr 23 17:58:24.304115 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:24.304077 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:24.304427 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:24.304207 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f"
Apr 23 17:58:25.304044 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:25.304006 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:25.304448 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:25.304129 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nh2kn" podUID="d9157db1-0537-4915-a273-5b7a482bc173"
Apr 23 17:58:26.304031 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:26.303996 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:26.304203 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:26.304125 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f"
Apr 23 17:58:27.303881 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:27.303849 2572 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn" Apr 23 17:58:27.304061 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:27.303986 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nh2kn" podUID="d9157db1-0537-4915-a273-5b7a482bc173" Apr 23 17:58:28.304336 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:28.304282 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr" Apr 23 17:58:28.304699 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:28.304456 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f" Apr 23 17:58:29.303590 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:29.303550 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn" Apr 23 17:58:29.303759 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:29.303695 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nh2kn" podUID="d9157db1-0537-4915-a273-5b7a482bc173" Apr 23 17:58:30.303893 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:30.303854 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr" Apr 23 17:58:30.304390 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:30.303977 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f" Apr 23 17:58:30.945380 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:30.945337 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs\") pod \"network-metrics-daemon-nh2kn\" (UID: \"d9157db1-0537-4915-a273-5b7a482bc173\") " pod="openshift-multus/network-metrics-daemon-nh2kn" Apr 23 17:58:30.945527 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:30.945495 2572 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:58:30.945579 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:30.945568 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs podName:d9157db1-0537-4915-a273-5b7a482bc173 nodeName:}" failed. No retries permitted until 2026-04-23 17:58:46.945545031 +0000 UTC m=+34.243791142 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs") pod "network-metrics-daemon-nh2kn" (UID: "d9157db1-0537-4915-a273-5b7a482bc173") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:58:31.146750 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:31.146716 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tmmx9\" (UniqueName: \"kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9\") pod \"network-check-target-dpfbr\" (UID: \"17e9a772-9316-4c67-bffe-e44ea2915f0f\") " pod="openshift-network-diagnostics/network-check-target-dpfbr" Apr 23 17:58:31.146922 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:31.146879 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:58:31.146922 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:31.146900 2572 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:58:31.146922 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:31.146910 2572 projected.go:194] Error preparing data for projected volume kube-api-access-tmmx9 for pod openshift-network-diagnostics/network-check-target-dpfbr: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:58:31.147077 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:31.146961 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9 podName:17e9a772-9316-4c67-bffe-e44ea2915f0f nodeName:}" failed. 
No retries permitted until 2026-04-23 17:58:47.146946118 +0000 UTC m=+34.445192229 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tmmx9" (UniqueName: "kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9") pod "network-check-target-dpfbr" (UID: "17e9a772-9316-4c67-bffe-e44ea2915f0f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:58:31.307514 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:31.307482 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn" Apr 23 17:58:31.307913 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:31.307608 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nh2kn" podUID="d9157db1-0537-4915-a273-5b7a482bc173" Apr 23 17:58:32.304413 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:32.304370 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr" Apr 23 17:58:32.304615 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:32.304508 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f" Apr 23 17:58:33.335137 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:33.335112 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn" Apr 23 17:58:33.335939 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:33.335236 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nh2kn" podUID="d9157db1-0537-4915-a273-5b7a482bc173" Apr 23 17:58:33.465525 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:33.465176 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" event={"ID":"6985296e-1df6-4584-8a29-5fb68230893f","Type":"ContainerStarted","Data":"2c27fb10752335e10eff7c655f91b92b38123683f5d0e223b3bd4dbfdc877128"} Apr 23 17:58:33.470308 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:33.470275 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-69j8s" event={"ID":"e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a","Type":"ContainerStarted","Data":"1bb6253424d141b42d9a35f1aedea332e729c79b689490e08f057107daf927f7"} Apr 23 17:58:33.473425 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:33.473292 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rz688" event={"ID":"ba95391c-a044-45b6-b86c-e5c745e4e7d1","Type":"ContainerStarted","Data":"ccec370a68f0c5a6b23c766ebf2dc6180c1aa164c51e06e2dad1fbe7cfb7865a"} Apr 23 17:58:33.475167 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:33.475138 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mggbx" 
event={"ID":"4a6da2ea-0b58-4e0b-957b-258095c2f013","Type":"ContainerStarted","Data":"bede439cd3f63bdeb5168f978d150425cc839edde48c5dcbfd3b01369322f748"} Apr 23 17:58:33.479222 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:33.478396 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b6xsg" event={"ID":"9571f146-c9fe-45ac-b2a7-1f4153d46c32","Type":"ContainerStarted","Data":"fd26667add6c7d0a90562c9ddbff8a688b106405388ff2ef89f203dddaa89e2e"} Apr 23 17:58:33.479222 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:33.478429 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b6xsg" event={"ID":"9571f146-c9fe-45ac-b2a7-1f4153d46c32","Type":"ContainerStarted","Data":"d92234c9a1228f1a0f1ef99c2cfa1b36d519506a12979a7445535793012df338"} Apr 23 17:58:33.482648 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:33.482156 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal" event={"ID":"0b2178ea263236be918f2343e8d4bd48","Type":"ContainerStarted","Data":"fa1ec43beb239922d3337b7e215bb4f33a6579090e487d692dc73c36f9835b39"} Apr 23 17:58:33.485997 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:33.485958 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-rf4xp" podStartSLOduration=3.28547339 podStartE2EDuration="20.48594658s" podCreationTimestamp="2026-04-23 17:58:13 +0000 UTC" firstStartedPulling="2026-04-23 17:58:15.882310998 +0000 UTC m=+3.180557120" lastFinishedPulling="2026-04-23 17:58:33.082784195 +0000 UTC m=+20.381030310" observedRunningTime="2026-04-23 17:58:33.485638317 +0000 UTC m=+20.783884449" watchObservedRunningTime="2026-04-23 17:58:33.48594658 +0000 UTC m=+20.784192713" Apr 23 17:58:33.504351 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:33.504221 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns/node-resolver-mggbx" podStartSLOduration=3.302285097 podStartE2EDuration="20.504202568s" podCreationTimestamp="2026-04-23 17:58:13 +0000 UTC" firstStartedPulling="2026-04-23 17:58:15.881179985 +0000 UTC m=+3.179426111" lastFinishedPulling="2026-04-23 17:58:33.083097458 +0000 UTC m=+20.381343582" observedRunningTime="2026-04-23 17:58:33.503387939 +0000 UTC m=+20.801634072" watchObservedRunningTime="2026-04-23 17:58:33.504202568 +0000 UTC m=+20.802448702" Apr 23 17:58:33.537790 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:33.537737 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-162.ec2.internal" podStartSLOduration=19.537720432 podStartE2EDuration="19.537720432s" podCreationTimestamp="2026-04-23 17:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:58:33.537562912 +0000 UTC m=+20.835809046" watchObservedRunningTime="2026-04-23 17:58:33.537720432 +0000 UTC m=+20.835966565" Apr 23 17:58:33.538100 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:33.538065 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-b6xsg" podStartSLOduration=18.64807039 podStartE2EDuration="20.538054598s" podCreationTimestamp="2026-04-23 17:58:13 +0000 UTC" firstStartedPulling="2026-04-23 17:58:15.879940572 +0000 UTC m=+3.178186683" lastFinishedPulling="2026-04-23 17:58:17.769924775 +0000 UTC m=+5.068170891" observedRunningTime="2026-04-23 17:58:33.523252449 +0000 UTC m=+20.821498588" watchObservedRunningTime="2026-04-23 17:58:33.538054598 +0000 UTC m=+20.836300731" Apr 23 17:58:33.557291 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:33.557236 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-69j8s" podStartSLOduration=3.320827993 
podStartE2EDuration="20.55721989s" podCreationTimestamp="2026-04-23 17:58:13 +0000 UTC" firstStartedPulling="2026-04-23 17:58:15.885173187 +0000 UTC m=+3.183419298" lastFinishedPulling="2026-04-23 17:58:33.12156507 +0000 UTC m=+20.419811195" observedRunningTime="2026-04-23 17:58:33.554547505 +0000 UTC m=+20.852793641" watchObservedRunningTime="2026-04-23 17:58:33.55721989 +0000 UTC m=+20.855466023" Apr 23 17:58:34.303745 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:34.303708 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr" Apr 23 17:58:34.303881 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:34.303843 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f" Apr 23 17:58:34.486609 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:34.486210 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9d9xv" event={"ID":"465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986","Type":"ContainerStarted","Data":"0deed44614a53999871228af3168d6322cf3c56dd1b68f027a5e601bd1aa6dcb"} Apr 23 17:58:34.489377 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:34.489152 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rz688" event={"ID":"ba95391c-a044-45b6-b86c-e5c745e4e7d1","Type":"ContainerStarted","Data":"f0df758675d06c428808986695ce29f5496a59c115e85c5cf27ac5b7d8ae9c50"} Apr 23 17:58:34.489377 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:34.489183 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rz688" event={"ID":"ba95391c-a044-45b6-b86c-e5c745e4e7d1","Type":"ContainerStarted","Data":"57222d872c87369bfb6ce7aa2a3a01472f19f05b7635375a308185dc4f22fa5f"} Apr 23 17:58:34.489377 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:34.489195 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rz688" event={"ID":"ba95391c-a044-45b6-b86c-e5c745e4e7d1","Type":"ContainerStarted","Data":"9f598d5a84436f278c11e9efcd47dcadccc141ae8cc471bd466fed0a4f6ed5c6"} Apr 23 17:58:34.489377 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:34.489208 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rz688" event={"ID":"ba95391c-a044-45b6-b86c-e5c745e4e7d1","Type":"ContainerStarted","Data":"18e0ed05fe17f0332da02655efebef1c632d3cff972e3e7abfe75b1585e70f0a"} Apr 23 17:58:34.489377 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:34.489220 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rz688" 
event={"ID":"ba95391c-a044-45b6-b86c-e5c745e4e7d1","Type":"ContainerStarted","Data":"a92288b2b24a35f6c91209c59e2da18f5c7f4826959a01f2a72a9132a3d10f79"} Apr 23 17:58:34.490643 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:34.490616 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-vdzr8" event={"ID":"41511472-f1af-4c98-ab11-9729dc21519e","Type":"ContainerStarted","Data":"cf8cf5a11d8fcb4cc4e91fe045d4bc6016fd279316be637b03bccd9a3a2abccf"} Apr 23 17:58:34.492072 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:34.492048 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" event={"ID":"6c2d8786-f36a-4e54-a020-66da9e674ee1","Type":"ContainerStarted","Data":"be0ef0f6d87ef262a52a334e06a3bf13850818cda3f977ad7d133ac10439faa7"} Apr 23 17:58:34.493495 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:34.493469 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-4vkdl" event={"ID":"037e05f6-1827-4968-abeb-530665aa07ab","Type":"ContainerStarted","Data":"74da28b76a2fa2b5be1d2f8b6387cf3f84546865ec1c1826eddee7f19feb1c46"} Apr 23 17:58:34.495144 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:34.495123 2572 generic.go:358] "Generic (PLEG): container finished" podID="8f18ab0b-c24e-4d53-9d15-941a178305d9" containerID="cf8dbae429634c2d6d8147bcfe7156e1be3aed6895020ae0cd70fe0f3f7aa5f7" exitCode=0 Apr 23 17:58:34.495272 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:34.495254 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lg6b8" event={"ID":"8f18ab0b-c24e-4d53-9d15-941a178305d9","Type":"ContainerDied","Data":"cf8dbae429634c2d6d8147bcfe7156e1be3aed6895020ae0cd70fe0f3f7aa5f7"} Apr 23 17:58:34.501377 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:34.501337 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/node-ca-9d9xv" podStartSLOduration=4.301143421 podStartE2EDuration="21.501310357s" podCreationTimestamp="2026-04-23 17:58:13 +0000 UTC" firstStartedPulling="2026-04-23 17:58:15.882664087 +0000 UTC m=+3.180910197" lastFinishedPulling="2026-04-23 17:58:33.082831011 +0000 UTC m=+20.381077133" observedRunningTime="2026-04-23 17:58:34.501286474 +0000 UTC m=+21.799532604" watchObservedRunningTime="2026-04-23 17:58:34.501310357 +0000 UTC m=+21.799556472" Apr 23 17:58:34.516887 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:34.516842 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-vdzr8" podStartSLOduration=4.319726703 podStartE2EDuration="21.516832543s" podCreationTimestamp="2026-04-23 17:58:13 +0000 UTC" firstStartedPulling="2026-04-23 17:58:15.885738042 +0000 UTC m=+3.183984157" lastFinishedPulling="2026-04-23 17:58:33.082843869 +0000 UTC m=+20.381089997" observedRunningTime="2026-04-23 17:58:34.516505828 +0000 UTC m=+21.814751962" watchObservedRunningTime="2026-04-23 17:58:34.516832543 +0000 UTC m=+21.815078675" Apr 23 17:58:34.558441 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:34.558370 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-4vkdl" podStartSLOduration=4.350393025 podStartE2EDuration="21.558355673s" podCreationTimestamp="2026-04-23 17:58:13 +0000 UTC" firstStartedPulling="2026-04-23 17:58:15.875142155 +0000 UTC m=+3.173388266" lastFinishedPulling="2026-04-23 17:58:33.083104789 +0000 UTC m=+20.381350914" observedRunningTime="2026-04-23 17:58:34.558061046 +0000 UTC m=+21.856307178" watchObservedRunningTime="2026-04-23 17:58:34.558355673 +0000 UTC m=+21.856601806" Apr 23 17:58:34.590627 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:34.590604 2572 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" 
path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Apr 23 17:58:35.279332 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:35.279170 2572 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-23T17:58:34.590621539Z","UUID":"0381bf16-dcff-4e6d-9664-942d27e3c311","Handler":null,"Name":"","Endpoint":""} Apr 23 17:58:35.281027 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:35.281006 2572 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 23 17:58:35.281027 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:35.281032 2572 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 23 17:58:35.304481 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:35.304455 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn" Apr 23 17:58:35.304784 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:35.304750 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nh2kn" podUID="d9157db1-0537-4915-a273-5b7a482bc173" Apr 23 17:58:35.499652 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:35.499601 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" event={"ID":"6c2d8786-f36a-4e54-a020-66da9e674ee1","Type":"ContainerStarted","Data":"9e4b5550cff67f18fb35a87ce878467f262ea66ca975748c03b941813e923e24"} Apr 23 17:58:36.303783 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:36.303752 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr" Apr 23 17:58:36.303938 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:36.303846 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f" Apr 23 17:58:36.503148 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:36.503110 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" event={"ID":"6c2d8786-f36a-4e54-a020-66da9e674ee1","Type":"ContainerStarted","Data":"d35da73cf308479c6d317154978ecd4bbedb71268cf3dc2da79c70ff224b3fdd"} Apr 23 17:58:36.506289 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:36.506253 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rz688" event={"ID":"ba95391c-a044-45b6-b86c-e5c745e4e7d1","Type":"ContainerStarted","Data":"7cd67de0bacd1ab646c7c786185a09fa493ce3aab4c4a0c56af5588d5dbe8ea8"} Apr 23 17:58:36.522823 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:36.522771 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sb4f6" podStartSLOduration=3.267712385 podStartE2EDuration="23.522753383s" podCreationTimestamp="2026-04-23 17:58:13 +0000 UTC" firstStartedPulling="2026-04-23 17:58:15.878561351 +0000 UTC m=+3.176807478" lastFinishedPulling="2026-04-23 17:58:36.13360236 +0000 UTC m=+23.431848476" observedRunningTime="2026-04-23 17:58:36.522352642 +0000 UTC m=+23.820598775" watchObservedRunningTime="2026-04-23 17:58:36.522753383 +0000 UTC m=+23.820999518" Apr 23 17:58:37.304179 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:37.304143 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn" Apr 23 17:58:37.304364 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:37.304267 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nh2kn" podUID="d9157db1-0537-4915-a273-5b7a482bc173" Apr 23 17:58:38.304420 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:38.304389 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr" Apr 23 17:58:38.304794 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:38.304497 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f"
Apr 23 17:58:38.512742 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:38.512708 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rz688" event={"ID":"ba95391c-a044-45b6-b86c-e5c745e4e7d1","Type":"ContainerStarted","Data":"e410156bd609fc684d7ecf83b95d8457ff4250dcf35a6df1fdf50018cc97587c"}
Apr 23 17:58:38.513152 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:38.513113 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:38.513152 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:38.513142 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:38.513277 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:38.513156 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:38.528710 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:38.528681 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:38.528802 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:38.528750 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rz688"
Apr 23 17:58:38.572672 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:38.572590 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rz688" podStartSLOduration=8.288140903 podStartE2EDuration="25.57257672s" podCreationTimestamp="2026-04-23 17:58:13 +0000 UTC" firstStartedPulling="2026-04-23 17:58:15.881114351 +0000 UTC m=+3.179360476" lastFinishedPulling="2026-04-23 17:58:33.165550168 +0000 UTC m=+20.463796293" observedRunningTime="2026-04-23 17:58:38.544362839 +0000 UTC m=+25.842608972" watchObservedRunningTime="2026-04-23 17:58:38.57257672 +0000 UTC m=+25.870822864"
Apr 23 17:58:38.737632 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:38.737609 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-4vkdl"
Apr 23 17:58:38.738146 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:38.738128 2572 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-4vkdl"
Apr 23 17:58:39.304605 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:39.304417 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:39.305019 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:39.304690 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nh2kn" podUID="d9157db1-0537-4915-a273-5b7a482bc173"
Apr 23 17:58:39.516171 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:39.516136 2572 generic.go:358] "Generic (PLEG): container finished" podID="8f18ab0b-c24e-4d53-9d15-941a178305d9" containerID="11372a718f907d2f3fb373b32c55be038e3c0e447a8b494a2b740a65dcd1c25d" exitCode=0
Apr 23 17:58:39.516310 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:39.516219 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lg6b8" event={"ID":"8f18ab0b-c24e-4d53-9d15-941a178305d9","Type":"ContainerDied","Data":"11372a718f907d2f3fb373b32c55be038e3c0e447a8b494a2b740a65dcd1c25d"}
Apr 23 17:58:39.516503 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:39.516484 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-4vkdl"
Apr 23 17:58:39.517616 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:39.516988 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-4vkdl"
Apr 23 17:58:40.303659 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:40.303631 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:40.303845 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:40.303750 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f"
Apr 23 17:58:40.341761 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:40.341728 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nh2kn"]
Apr 23 17:58:40.342566 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:40.341833 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:40.342566 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:40.341943 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nh2kn" podUID="d9157db1-0537-4915-a273-5b7a482bc173"
Apr 23 17:58:40.343600 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:40.343577 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-dpfbr"]
Apr 23 17:58:40.518560 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:40.518531 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:40.518978 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:40.518951 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f"
Apr 23 17:58:41.521648 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:41.521613 2572 generic.go:358] "Generic (PLEG): container finished" podID="8f18ab0b-c24e-4d53-9d15-941a178305d9" containerID="f2efa0dfaff08c9a3544dc63c2717fcfdf0356f4d63c0c3bf51b68c347a6a908" exitCode=0
Apr 23 17:58:41.522240 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:41.521701 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lg6b8" event={"ID":"8f18ab0b-c24e-4d53-9d15-941a178305d9","Type":"ContainerDied","Data":"f2efa0dfaff08c9a3544dc63c2717fcfdf0356f4d63c0c3bf51b68c347a6a908"}
Apr 23 17:58:42.304333 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:42.304282 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:42.304333 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:42.304309 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:42.304553 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:42.304427 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nh2kn" podUID="d9157db1-0537-4915-a273-5b7a482bc173"
Apr 23 17:58:42.304553 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:42.304524 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f"
Apr 23 17:58:43.527085 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:43.527047 2572 generic.go:358] "Generic (PLEG): container finished" podID="8f18ab0b-c24e-4d53-9d15-941a178305d9" containerID="ca97aebbbedd7401129ae1b4350701415cdf506cc6e8b1df72b6de39549f8cf5" exitCode=0
Apr 23 17:58:43.527489 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:43.527115 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lg6b8" event={"ID":"8f18ab0b-c24e-4d53-9d15-941a178305d9","Type":"ContainerDied","Data":"ca97aebbbedd7401129ae1b4350701415cdf506cc6e8b1df72b6de39549f8cf5"}
Apr 23 17:58:44.303903 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:44.303870 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:44.304123 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:44.303881 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:44.304123 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:44.303977 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-dpfbr" podUID="17e9a772-9316-4c67-bffe-e44ea2915f0f"
Apr 23 17:58:44.304123 ip-10-0-130-162 kubenswrapper[2572]: E0423 17:58:44.304079 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nh2kn" podUID="d9157db1-0537-4915-a273-5b7a482bc173"
Apr 23 17:58:46.024361 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.024312 2572 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-162.ec2.internal" event="NodeReady"
Apr 23 17:58:46.024860 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.024484 2572 kubelet_node_status.go:550] "Fast updating node status as it just became ready"
Apr 23 17:58:46.071089 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.071054 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-85kpl"]
Apr 23 17:58:46.099076 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.099047 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-mkvqp"]
Apr 23 17:58:46.099236 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.099156 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-85kpl"
Apr 23 17:58:46.102006 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.101820 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Apr 23 17:58:46.102006 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.101876 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Apr 23 17:58:46.102006 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.101948 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-nr8tq\""
Apr 23 17:58:46.113621 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.113579 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-85kpl"]
Apr 23 17:58:46.113621 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.113610 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-mkvqp"]
Apr 23 17:58:46.113812 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.113737 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.116505 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.116480 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\""
Apr 23 17:58:46.117127 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.117101 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-2kkk6\""
Apr 23 17:58:46.117254 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.117115 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\""
Apr 23 17:58:46.117254 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.117194 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\""
Apr 23 17:58:46.117369 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.117313 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\""
Apr 23 17:58:46.171920 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.171880 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hhchc"]
Apr 23 17:58:46.200364 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.200308 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hhchc"]
Apr 23 17:58:46.200515 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.200434 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hhchc"
Apr 23 17:58:46.203160 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.203105 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Apr 23 17:58:46.203160 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.203143 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Apr 23 17:58:46.203438 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.203421 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-jg9fn\""
Apr 23 17:58:46.203547 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.203523 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Apr 23 17:58:46.267418 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.267384 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-mkvqp\" (UID: \"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb\") " pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.267571 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.267431 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxvf\" (UniqueName: \"kubernetes.io/projected/3be8ea05-2624-4d7d-a9b5-24df2e7b7e43-kube-api-access-mwxvf\") pod \"dns-default-85kpl\" (UID: \"3be8ea05-2624-4d7d-a9b5-24df2e7b7e43\") " pod="openshift-dns/dns-default-85kpl"
Apr 23 17:58:46.267571 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.267502 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hbkk\" (UniqueName: \"kubernetes.io/projected/50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb-kube-api-access-2hbkk\") pod \"insights-runtime-extractor-mkvqp\" (UID: \"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb\") " pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.267571 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.267565 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb-crio-socket\") pod \"insights-runtime-extractor-mkvqp\" (UID: \"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb\") " pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.267693 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.267604 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3be8ea05-2624-4d7d-a9b5-24df2e7b7e43-metrics-tls\") pod \"dns-default-85kpl\" (UID: \"3be8ea05-2624-4d7d-a9b5-24df2e7b7e43\") " pod="openshift-dns/dns-default-85kpl"
Apr 23 17:58:46.267693 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.267624 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb-data-volume\") pod \"insights-runtime-extractor-mkvqp\" (UID: \"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb\") " pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.267693 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.267647 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-mkvqp\" (UID: \"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb\") " pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.267813 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.267718 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3be8ea05-2624-4d7d-a9b5-24df2e7b7e43-tmp-dir\") pod \"dns-default-85kpl\" (UID: \"3be8ea05-2624-4d7d-a9b5-24df2e7b7e43\") " pod="openshift-dns/dns-default-85kpl"
Apr 23 17:58:46.267813 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.267761 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3be8ea05-2624-4d7d-a9b5-24df2e7b7e43-config-volume\") pod \"dns-default-85kpl\" (UID: \"3be8ea05-2624-4d7d-a9b5-24df2e7b7e43\") " pod="openshift-dns/dns-default-85kpl"
Apr 23 17:58:46.304378 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.304301 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:46.304518 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.304302 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:46.307799 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.307765 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Apr 23 17:58:46.307799 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.307794 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Apr 23 17:58:46.307982 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.307841 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-x582n\""
Apr 23 17:58:46.307982 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.307765 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-p6fds\""
Apr 23 17:58:46.308285 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.308267 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Apr 23 17:58:46.368520 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.368490 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3be8ea05-2624-4d7d-a9b5-24df2e7b7e43-config-volume\") pod \"dns-default-85kpl\" (UID: \"3be8ea05-2624-4d7d-a9b5-24df2e7b7e43\") " pod="openshift-dns/dns-default-85kpl"
Apr 23 17:58:46.368658 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.368536 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-mkvqp\" (UID: \"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb\") " pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.368658 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.368569 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75zpv\" (UniqueName: \"kubernetes.io/projected/df81e11f-ec7c-402f-b956-c59eab2eebbf-kube-api-access-75zpv\") pod \"ingress-canary-hhchc\" (UID: \"df81e11f-ec7c-402f-b956-c59eab2eebbf\") " pod="openshift-ingress-canary/ingress-canary-hhchc"
Apr 23 17:58:46.368658 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.368595 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mwxvf\" (UniqueName: \"kubernetes.io/projected/3be8ea05-2624-4d7d-a9b5-24df2e7b7e43-kube-api-access-mwxvf\") pod \"dns-default-85kpl\" (UID: \"3be8ea05-2624-4d7d-a9b5-24df2e7b7e43\") " pod="openshift-dns/dns-default-85kpl"
Apr 23 17:58:46.368658 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.368635 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2hbkk\" (UniqueName: \"kubernetes.io/projected/50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb-kube-api-access-2hbkk\") pod \"insights-runtime-extractor-mkvqp\" (UID: \"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb\") " pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.368872 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.368671 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df81e11f-ec7c-402f-b956-c59eab2eebbf-cert\") pod \"ingress-canary-hhchc\" (UID: \"df81e11f-ec7c-402f-b956-c59eab2eebbf\") " pod="openshift-ingress-canary/ingress-canary-hhchc"
Apr 23 17:58:46.368872 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.368699 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb-crio-socket\") pod \"insights-runtime-extractor-mkvqp\" (UID: \"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb\") " pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.368872 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.368751 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3be8ea05-2624-4d7d-a9b5-24df2e7b7e43-metrics-tls\") pod \"dns-default-85kpl\" (UID: \"3be8ea05-2624-4d7d-a9b5-24df2e7b7e43\") " pod="openshift-dns/dns-default-85kpl"
Apr 23 17:58:46.368872 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.368775 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb-data-volume\") pod \"insights-runtime-extractor-mkvqp\" (UID: \"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb\") " pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.368872 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.368838 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-mkvqp\" (UID: \"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb\") " pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.368872 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.368868 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3be8ea05-2624-4d7d-a9b5-24df2e7b7e43-tmp-dir\") pod \"dns-default-85kpl\" (UID: \"3be8ea05-2624-4d7d-a9b5-24df2e7b7e43\") " pod="openshift-dns/dns-default-85kpl"
Apr 23 17:58:46.369153 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.369015 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb-crio-socket\") pod \"insights-runtime-extractor-mkvqp\" (UID: \"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb\") " pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.369153 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.369107 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3be8ea05-2624-4d7d-a9b5-24df2e7b7e43-config-volume\") pod \"dns-default-85kpl\" (UID: \"3be8ea05-2624-4d7d-a9b5-24df2e7b7e43\") " pod="openshift-dns/dns-default-85kpl"
Apr 23 17:58:46.369153 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.369138 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-mkvqp\" (UID: \"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb\") " pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.369300 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.369208 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb-data-volume\") pod \"insights-runtime-extractor-mkvqp\" (UID: \"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb\") " pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.369300 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.369242 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3be8ea05-2624-4d7d-a9b5-24df2e7b7e43-tmp-dir\") pod \"dns-default-85kpl\" (UID: \"3be8ea05-2624-4d7d-a9b5-24df2e7b7e43\") " pod="openshift-dns/dns-default-85kpl"
Apr 23 17:58:46.373482 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.373459 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3be8ea05-2624-4d7d-a9b5-24df2e7b7e43-metrics-tls\") pod \"dns-default-85kpl\" (UID: \"3be8ea05-2624-4d7d-a9b5-24df2e7b7e43\") " pod="openshift-dns/dns-default-85kpl"
Apr 23 17:58:46.373605 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.373480 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-mkvqp\" (UID: \"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb\") " pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.377660 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.377634 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwxvf\" (UniqueName: \"kubernetes.io/projected/3be8ea05-2624-4d7d-a9b5-24df2e7b7e43-kube-api-access-mwxvf\") pod \"dns-default-85kpl\" (UID: \"3be8ea05-2624-4d7d-a9b5-24df2e7b7e43\") " pod="openshift-dns/dns-default-85kpl"
Apr 23 17:58:46.391565 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.391540 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hbkk\" (UniqueName: \"kubernetes.io/projected/50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb-kube-api-access-2hbkk\") pod \"insights-runtime-extractor-mkvqp\" (UID: \"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb\") " pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.409545 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.409526 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-85kpl"
Apr 23 17:58:46.424300 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.424280 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-mkvqp"
Apr 23 17:58:46.469742 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.469708 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-75zpv\" (UniqueName: \"kubernetes.io/projected/df81e11f-ec7c-402f-b956-c59eab2eebbf-kube-api-access-75zpv\") pod \"ingress-canary-hhchc\" (UID: \"df81e11f-ec7c-402f-b956-c59eab2eebbf\") " pod="openshift-ingress-canary/ingress-canary-hhchc"
Apr 23 17:58:46.469928 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.469767 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df81e11f-ec7c-402f-b956-c59eab2eebbf-cert\") pod \"ingress-canary-hhchc\" (UID: \"df81e11f-ec7c-402f-b956-c59eab2eebbf\") " pod="openshift-ingress-canary/ingress-canary-hhchc"
Apr 23 17:58:46.472930 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.472899 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df81e11f-ec7c-402f-b956-c59eab2eebbf-cert\") pod \"ingress-canary-hhchc\" (UID: \"df81e11f-ec7c-402f-b956-c59eab2eebbf\") " pod="openshift-ingress-canary/ingress-canary-hhchc"
Apr 23 17:58:46.478951 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.478919 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-75zpv\" (UniqueName: \"kubernetes.io/projected/df81e11f-ec7c-402f-b956-c59eab2eebbf-kube-api-access-75zpv\") pod \"ingress-canary-hhchc\" (UID: \"df81e11f-ec7c-402f-b956-c59eab2eebbf\") " pod="openshift-ingress-canary/ingress-canary-hhchc"
Apr 23 17:58:46.511899 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.510929 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hhchc"
Apr 23 17:58:46.589589 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.589559 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-mkvqp"]
Apr 23 17:58:46.592748 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.592700 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-85kpl"]
Apr 23 17:58:46.594203 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:46.594171 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50f8ac27_9441_4fd8_88e9_aa3bdd1f22cb.slice/crio-2c8e588471703068d44562750ba3bc9ad6e8ff3c09502d2c5c9150381747d913 WatchSource:0}: Error finding container 2c8e588471703068d44562750ba3bc9ad6e8ff3c09502d2c5c9150381747d913: Status 404 returned error can't find the container with id 2c8e588471703068d44562750ba3bc9ad6e8ff3c09502d2c5c9150381747d913
Apr 23 17:58:46.598228 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:46.598196 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3be8ea05_2624_4d7d_a9b5_24df2e7b7e43.slice/crio-e7ab9197bdcec7f36c072fa7d12fd4b8be9a57f9816cc1b1fd5867d01f264efe WatchSource:0}: Error finding container e7ab9197bdcec7f36c072fa7d12fd4b8be9a57f9816cc1b1fd5867d01f264efe: Status 404 returned error can't find the container with id e7ab9197bdcec7f36c072fa7d12fd4b8be9a57f9816cc1b1fd5867d01f264efe
Apr 23 17:58:46.657778 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.657578 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hhchc"]
Apr 23 17:58:46.660509 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:46.660477 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf81e11f_ec7c_402f_b956_c59eab2eebbf.slice/crio-5514e06f79b3c0ec1a79f34de51a3f86d2af39c1d431ca5df5d53d135c91b198 WatchSource:0}: Error finding container 5514e06f79b3c0ec1a79f34de51a3f86d2af39c1d431ca5df5d53d135c91b198: Status 404 returned error can't find the container with id 5514e06f79b3c0ec1a79f34de51a3f86d2af39c1d431ca5df5d53d135c91b198
Apr 23 17:58:46.973247 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.973165 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs\") pod \"network-metrics-daemon-nh2kn\" (UID: \"d9157db1-0537-4915-a273-5b7a482bc173\") " pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:46.976353 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:46.976313 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9157db1-0537-4915-a273-5b7a482bc173-metrics-certs\") pod \"network-metrics-daemon-nh2kn\" (UID: \"d9157db1-0537-4915-a273-5b7a482bc173\") " pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:47.174924 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:47.174882 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tmmx9\" (UniqueName: \"kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9\") pod \"network-check-target-dpfbr\" (UID: \"17e9a772-9316-4c67-bffe-e44ea2915f0f\") " pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:47.178246 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:47.178220 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmmx9\" (UniqueName: \"kubernetes.io/projected/17e9a772-9316-4c67-bffe-e44ea2915f0f-kube-api-access-tmmx9\") pod \"network-check-target-dpfbr\" (UID: \"17e9a772-9316-4c67-bffe-e44ea2915f0f\") " pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:47.220215 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:47.220169 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-dpfbr"
Apr 23 17:58:47.226154 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:47.226048 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nh2kn"
Apr 23 17:58:47.536828 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:47.536774 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hhchc" event={"ID":"df81e11f-ec7c-402f-b956-c59eab2eebbf","Type":"ContainerStarted","Data":"5514e06f79b3c0ec1a79f34de51a3f86d2af39c1d431ca5df5d53d135c91b198"}
Apr 23 17:58:47.538017 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:47.537987 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-85kpl" event={"ID":"3be8ea05-2624-4d7d-a9b5-24df2e7b7e43","Type":"ContainerStarted","Data":"e7ab9197bdcec7f36c072fa7d12fd4b8be9a57f9816cc1b1fd5867d01f264efe"}
Apr 23 17:58:47.539491 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:47.539463 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-mkvqp" event={"ID":"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb","Type":"ContainerStarted","Data":"5e5a3814b81b75f94c85d5803d1a3798336bc72dcc6e4d8a11d8f4952395cef9"}
Apr 23 17:58:47.539622 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:47.539497 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-mkvqp" event={"ID":"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb","Type":"ContainerStarted","Data":"2c8e588471703068d44562750ba3bc9ad6e8ff3c09502d2c5c9150381747d913"}
Apr 23 17:58:50.183939 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:50.183719 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-dpfbr"]
Apr 23 17:58:50.199200 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:50.198819 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nh2kn"]
Apr 23 17:58:50.262882 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:50.262777 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17e9a772_9316_4c67_bffe_e44ea2915f0f.slice/crio-27dcf6e2afd2df168ad27a65e6ba7b45dbe5dcd0b6bf03606c7aa9034f041ea4 WatchSource:0}: Error finding container 27dcf6e2afd2df168ad27a65e6ba7b45dbe5dcd0b6bf03606c7aa9034f041ea4: Status 404 returned error can't find the container with id 27dcf6e2afd2df168ad27a65e6ba7b45dbe5dcd0b6bf03606c7aa9034f041ea4
Apr 23 17:58:50.263524 ip-10-0-130-162 kubenswrapper[2572]: W0423 17:58:50.263481 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9157db1_0537_4915_a273_5b7a482bc173.slice/crio-bf4da442e6566250fb5bcf7464b195dda914b1c0df0181d67610f4a71d03e4d8 WatchSource:0}: Error finding container bf4da442e6566250fb5bcf7464b195dda914b1c0df0181d67610f4a71d03e4d8: Status 404 returned error can't find the container with id bf4da442e6566250fb5bcf7464b195dda914b1c0df0181d67610f4a71d03e4d8
Apr 23 17:58:50.549283 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:50.549246 2572 generic.go:358] "Generic (PLEG): container finished" podID="8f18ab0b-c24e-4d53-9d15-941a178305d9" containerID="e8d560d6eb676efa803628af2091512d998b6817f19a76596a173375e418ab26" exitCode=0
Apr 23 17:58:50.549444 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:50.549332 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lg6b8" event={"ID":"8f18ab0b-c24e-4d53-9d15-941a178305d9","Type":"ContainerDied","Data":"e8d560d6eb676efa803628af2091512d998b6817f19a76596a173375e418ab26"}
Apr 23 17:58:50.551338 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:50.551284 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-85kpl" event={"ID":"3be8ea05-2624-4d7d-a9b5-24df2e7b7e43","Type":"ContainerStarted","Data":"073f9704b96b8aba2278c55c4ab6b13e00a0734979c9f5974e9b6920d0db658f"}
Apr 23 17:58:50.551458 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:50.551347 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-85kpl" event={"ID":"3be8ea05-2624-4d7d-a9b5-24df2e7b7e43","Type":"ContainerStarted","Data":"55cb61e98ba874093f07f124a50a1749ff6cde54a52bb9a18854827df35a7fb8"}
Apr 23 17:58:50.551458 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:50.551443 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-85kpl"
Apr 23 17:58:50.552620 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:50.552565 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-dpfbr" event={"ID":"17e9a772-9316-4c67-bffe-e44ea2915f0f","Type":"ContainerStarted","Data":"27dcf6e2afd2df168ad27a65e6ba7b45dbe5dcd0b6bf03606c7aa9034f041ea4"}
Apr 23 17:58:50.554480 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:50.554447 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-mkvqp" event={"ID":"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb","Type":"ContainerStarted","Data":"e4e63adfc8e8aee1b760bfd4560c85a3f7e08760b98af96a5cdc6e02279e17f4"}
Apr 23 17:58:50.555453 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:50.555399 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nh2kn"
event={"ID":"d9157db1-0537-4915-a273-5b7a482bc173","Type":"ContainerStarted","Data":"bf4da442e6566250fb5bcf7464b195dda914b1c0df0181d67610f4a71d03e4d8"} Apr 23 17:58:50.588877 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:50.588823 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-85kpl" podStartSLOduration=1.1969231790000001 podStartE2EDuration="4.588804286s" podCreationTimestamp="2026-04-23 17:58:46 +0000 UTC" firstStartedPulling="2026-04-23 17:58:46.600215418 +0000 UTC m=+33.898461529" lastFinishedPulling="2026-04-23 17:58:49.992096511 +0000 UTC m=+37.290342636" observedRunningTime="2026-04-23 17:58:50.587485672 +0000 UTC m=+37.885731810" watchObservedRunningTime="2026-04-23 17:58:50.588804286 +0000 UTC m=+37.887050420" Apr 23 17:58:51.560777 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:51.560734 2572 generic.go:358] "Generic (PLEG): container finished" podID="8f18ab0b-c24e-4d53-9d15-941a178305d9" containerID="ef9d8b500678f0d69039f19747cefa06d39319d495a0e683a3d4469d7d2203a1" exitCode=0 Apr 23 17:58:51.561243 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:51.560859 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lg6b8" event={"ID":"8f18ab0b-c24e-4d53-9d15-941a178305d9","Type":"ContainerDied","Data":"ef9d8b500678f0d69039f19747cefa06d39319d495a0e683a3d4469d7d2203a1"} Apr 23 17:58:52.568001 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:52.567917 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nh2kn" event={"ID":"d9157db1-0537-4915-a273-5b7a482bc173","Type":"ContainerStarted","Data":"1e87c25e6d78eda8de1fb2492812fa14ae3d7ac6c8359578e4ed313e222830b9"} Apr 23 17:58:52.568001 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:52.567960 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nh2kn" 
event={"ID":"d9157db1-0537-4915-a273-5b7a482bc173","Type":"ContainerStarted","Data":"4b78970227b5ed040d7a9bf6487595c81707b397915bfa6d37b01582ee3f03fb"} Apr 23 17:58:52.571296 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:52.571266 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lg6b8" event={"ID":"8f18ab0b-c24e-4d53-9d15-941a178305d9","Type":"ContainerStarted","Data":"175dee79f68c3fb982d383f65ee460f6e91adbffd28d5e14f4fd2e6ffaac9213"} Apr 23 17:58:52.572732 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:52.572711 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hhchc" event={"ID":"df81e11f-ec7c-402f-b956-c59eab2eebbf","Type":"ContainerStarted","Data":"4cdb328c5c2fadfd96f66aef61771818c705b7f224b9ac602e9d9c564b840c50"} Apr 23 17:58:52.586131 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:52.586080 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-nh2kn" podStartSLOduration=37.84073109 podStartE2EDuration="39.586064304s" podCreationTimestamp="2026-04-23 17:58:13 +0000 UTC" firstStartedPulling="2026-04-23 17:58:50.276266403 +0000 UTC m=+37.574512528" lastFinishedPulling="2026-04-23 17:58:52.021599618 +0000 UTC m=+39.319845742" observedRunningTime="2026-04-23 17:58:52.585108745 +0000 UTC m=+39.883354879" watchObservedRunningTime="2026-04-23 17:58:52.586064304 +0000 UTC m=+39.884310453" Apr 23 17:58:52.604533 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:52.604489 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hhchc" podStartSLOduration=1.24725986 podStartE2EDuration="6.604478063s" podCreationTimestamp="2026-04-23 17:58:46 +0000 UTC" firstStartedPulling="2026-04-23 17:58:46.662650549 +0000 UTC m=+33.960896662" lastFinishedPulling="2026-04-23 17:58:52.019868743 +0000 UTC m=+39.318114865" observedRunningTime="2026-04-23 
17:58:52.603692033 +0000 UTC m=+39.901938170" watchObservedRunningTime="2026-04-23 17:58:52.604478063 +0000 UTC m=+39.902724195" Apr 23 17:58:52.629098 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:52.629024 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lg6b8" podStartSLOduration=5.510534932 podStartE2EDuration="39.6290094s" podCreationTimestamp="2026-04-23 17:58:13 +0000 UTC" firstStartedPulling="2026-04-23 17:58:15.874798374 +0000 UTC m=+3.173044497" lastFinishedPulling="2026-04-23 17:58:49.99327284 +0000 UTC m=+37.291518965" observedRunningTime="2026-04-23 17:58:52.627537187 +0000 UTC m=+39.925783320" watchObservedRunningTime="2026-04-23 17:58:52.6290094 +0000 UTC m=+39.927255537" Apr 23 17:58:54.579795 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:54.579753 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-dpfbr" event={"ID":"17e9a772-9316-4c67-bffe-e44ea2915f0f","Type":"ContainerStarted","Data":"ddd0571c70f30f48b17e64036d9d28a4e850a35dd180ee895adf881b8573e77b"} Apr 23 17:58:54.580231 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:54.579895 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-dpfbr" Apr 23 17:58:54.581462 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:54.581430 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-mkvqp" event={"ID":"50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb","Type":"ContainerStarted","Data":"3bd582d3f12a0a379469982fc8a362f6423f1ea2978c2ce04d6e51d3d0336859"} Apr 23 17:58:54.598724 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:58:54.598675 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-dpfbr" podStartSLOduration=37.837359974 podStartE2EDuration="41.598654412s" 
podCreationTimestamp="2026-04-23 17:58:13 +0000 UTC" firstStartedPulling="2026-04-23 17:58:50.27583581 +0000 UTC m=+37.574081947" lastFinishedPulling="2026-04-23 17:58:54.037130257 +0000 UTC m=+41.335376385" observedRunningTime="2026-04-23 17:58:54.596845543 +0000 UTC m=+41.895091687" watchObservedRunningTime="2026-04-23 17:58:54.598654412 +0000 UTC m=+41.896900548" Apr 23 17:59:00.563865 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:59:00.563834 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-85kpl" Apr 23 17:59:00.581946 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:59:00.581900 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-mkvqp" podStartSLOduration=7.225980872 podStartE2EDuration="14.581887103s" podCreationTimestamp="2026-04-23 17:58:46 +0000 UTC" firstStartedPulling="2026-04-23 17:58:46.673296433 +0000 UTC m=+33.971542544" lastFinishedPulling="2026-04-23 17:58:54.02920266 +0000 UTC m=+41.327448775" observedRunningTime="2026-04-23 17:58:54.617954086 +0000 UTC m=+41.916200220" watchObservedRunningTime="2026-04-23 17:59:00.581887103 +0000 UTC m=+47.880133235" Apr 23 17:59:10.531957 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:59:10.531926 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rz688" Apr 23 17:59:25.586254 ip-10-0-130-162 kubenswrapper[2572]: I0423 17:59:25.586137 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-dpfbr" Apr 23 18:07:36.955715 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:36.955676 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2"] Apr 23 18:07:36.958579 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:36.958561 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" Apr 23 18:07:36.961249 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:36.961202 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Apr 23 18:07:36.961249 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:36.961224 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-njbb2\"" Apr 23 18:07:36.962415 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:36.962400 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Apr 23 18:07:36.978949 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:36.978917 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2"] Apr 23 18:07:36.980226 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:36.980208 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96ac9ac8-18c8-45bb-839b-8f9cce149d45-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2\" (UID: \"96ac9ac8-18c8-45bb-839b-8f9cce149d45\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" Apr 23 18:07:36.980296 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:36.980243 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96ac9ac8-18c8-45bb-839b-8f9cce149d45-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2\" (UID: \"96ac9ac8-18c8-45bb-839b-8f9cce149d45\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" Apr 23 18:07:36.980296 
ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:36.980263 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l8pj\" (UniqueName: \"kubernetes.io/projected/96ac9ac8-18c8-45bb-839b-8f9cce149d45-kube-api-access-6l8pj\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2\" (UID: \"96ac9ac8-18c8-45bb-839b-8f9cce149d45\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" Apr 23 18:07:37.081495 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:37.081462 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96ac9ac8-18c8-45bb-839b-8f9cce149d45-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2\" (UID: \"96ac9ac8-18c8-45bb-839b-8f9cce149d45\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" Apr 23 18:07:37.081673 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:37.081505 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96ac9ac8-18c8-45bb-839b-8f9cce149d45-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2\" (UID: \"96ac9ac8-18c8-45bb-839b-8f9cce149d45\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" Apr 23 18:07:37.081673 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:37.081526 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6l8pj\" (UniqueName: \"kubernetes.io/projected/96ac9ac8-18c8-45bb-839b-8f9cce149d45-kube-api-access-6l8pj\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2\" (UID: \"96ac9ac8-18c8-45bb-839b-8f9cce149d45\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" Apr 23 18:07:37.081853 ip-10-0-130-162 
kubenswrapper[2572]: I0423 18:07:37.081831 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96ac9ac8-18c8-45bb-839b-8f9cce149d45-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2\" (UID: \"96ac9ac8-18c8-45bb-839b-8f9cce149d45\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" Apr 23 18:07:37.081924 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:37.081891 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96ac9ac8-18c8-45bb-839b-8f9cce149d45-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2\" (UID: \"96ac9ac8-18c8-45bb-839b-8f9cce149d45\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" Apr 23 18:07:37.091960 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:37.091935 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l8pj\" (UniqueName: \"kubernetes.io/projected/96ac9ac8-18c8-45bb-839b-8f9cce149d45-kube-api-access-6l8pj\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2\" (UID: \"96ac9ac8-18c8-45bb-839b-8f9cce149d45\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" Apr 23 18:07:37.266957 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:37.266927 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" Apr 23 18:07:37.379701 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:37.379673 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2"] Apr 23 18:07:37.384267 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:37.384252 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 18:07:37.861196 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:37.861154 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" event={"ID":"96ac9ac8-18c8-45bb-839b-8f9cce149d45","Type":"ContainerStarted","Data":"95e20b2564eb681a7a4808f74ed3731ae081ffd22d396618185bfd43ff574a5e"} Apr 23 18:07:42.877507 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:42.877467 2572 generic.go:358] "Generic (PLEG): container finished" podID="96ac9ac8-18c8-45bb-839b-8f9cce149d45" containerID="fdea1a0df260433cc57e2521985fbbd633a74c29b29aa75676c40d0b3cc90f6d" exitCode=0 Apr 23 18:07:42.877934 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:42.877556 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" event={"ID":"96ac9ac8-18c8-45bb-839b-8f9cce149d45","Type":"ContainerDied","Data":"fdea1a0df260433cc57e2521985fbbd633a74c29b29aa75676c40d0b3cc90f6d"} Apr 23 18:07:45.888621 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:45.888591 2572 generic.go:358] "Generic (PLEG): container finished" podID="96ac9ac8-18c8-45bb-839b-8f9cce149d45" containerID="6c61463211b19eb3b3912bddfb08b48e03734dba33849f0ff35617a7cd741e2d" exitCode=0 Apr 23 18:07:45.889000 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:45.888638 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" event={"ID":"96ac9ac8-18c8-45bb-839b-8f9cce149d45","Type":"ContainerDied","Data":"6c61463211b19eb3b3912bddfb08b48e03734dba33849f0ff35617a7cd741e2d"} Apr 23 18:07:52.910432 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:52.910395 2572 generic.go:358] "Generic (PLEG): container finished" podID="96ac9ac8-18c8-45bb-839b-8f9cce149d45" containerID="ba75e7c7b58437d553e63448380c9466aea819af62c183e8475e1bca5fbf0227" exitCode=0 Apr 23 18:07:52.910833 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:52.910464 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" event={"ID":"96ac9ac8-18c8-45bb-839b-8f9cce149d45","Type":"ContainerDied","Data":"ba75e7c7b58437d553e63448380c9466aea819af62c183e8475e1bca5fbf0227"} Apr 23 18:07:54.032402 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:54.032376 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" Apr 23 18:07:54.094757 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:54.094731 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l8pj\" (UniqueName: \"kubernetes.io/projected/96ac9ac8-18c8-45bb-839b-8f9cce149d45-kube-api-access-6l8pj\") pod \"96ac9ac8-18c8-45bb-839b-8f9cce149d45\" (UID: \"96ac9ac8-18c8-45bb-839b-8f9cce149d45\") " Apr 23 18:07:54.094757 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:54.094760 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96ac9ac8-18c8-45bb-839b-8f9cce149d45-bundle\") pod \"96ac9ac8-18c8-45bb-839b-8f9cce149d45\" (UID: \"96ac9ac8-18c8-45bb-839b-8f9cce149d45\") " Apr 23 18:07:54.094907 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:54.094790 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96ac9ac8-18c8-45bb-839b-8f9cce149d45-util\") pod \"96ac9ac8-18c8-45bb-839b-8f9cce149d45\" (UID: \"96ac9ac8-18c8-45bb-839b-8f9cce149d45\") " Apr 23 18:07:54.095340 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:54.095280 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96ac9ac8-18c8-45bb-839b-8f9cce149d45-bundle" (OuterVolumeSpecName: "bundle") pod "96ac9ac8-18c8-45bb-839b-8f9cce149d45" (UID: "96ac9ac8-18c8-45bb-839b-8f9cce149d45"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 23 18:07:54.096846 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:54.096819 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96ac9ac8-18c8-45bb-839b-8f9cce149d45-kube-api-access-6l8pj" (OuterVolumeSpecName: "kube-api-access-6l8pj") pod "96ac9ac8-18c8-45bb-839b-8f9cce149d45" (UID: "96ac9ac8-18c8-45bb-839b-8f9cce149d45"). InnerVolumeSpecName "kube-api-access-6l8pj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:07:54.099528 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:54.099510 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96ac9ac8-18c8-45bb-839b-8f9cce149d45-util" (OuterVolumeSpecName: "util") pod "96ac9ac8-18c8-45bb-839b-8f9cce149d45" (UID: "96ac9ac8-18c8-45bb-839b-8f9cce149d45"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 23 18:07:54.195474 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:54.195407 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6l8pj\" (UniqueName: \"kubernetes.io/projected/96ac9ac8-18c8-45bb-839b-8f9cce149d45-kube-api-access-6l8pj\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\"" Apr 23 18:07:54.195474 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:54.195437 2572 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96ac9ac8-18c8-45bb-839b-8f9cce149d45-bundle\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\"" Apr 23 18:07:54.195474 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:54.195446 2572 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96ac9ac8-18c8-45bb-839b-8f9cce149d45-util\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\"" Apr 23 18:07:54.916857 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:54.916825 2572 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" event={"ID":"96ac9ac8-18c8-45bb-839b-8f9cce149d45","Type":"ContainerDied","Data":"95e20b2564eb681a7a4808f74ed3731ae081ffd22d396618185bfd43ff574a5e"} Apr 23 18:07:54.916857 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:54.916849 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29cpw2x2" Apr 23 18:07:54.916857 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:54.916860 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95e20b2564eb681a7a4808f74ed3731ae081ffd22d396618185bfd43ff574a5e" Apr 23 18:07:59.083882 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.083847 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s"] Apr 23 18:07:59.084266 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.084039 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96ac9ac8-18c8-45bb-839b-8f9cce149d45" containerName="util" Apr 23 18:07:59.084266 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.084049 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ac9ac8-18c8-45bb-839b-8f9cce149d45" containerName="util" Apr 23 18:07:59.084266 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.084061 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96ac9ac8-18c8-45bb-839b-8f9cce149d45" containerName="pull" Apr 23 18:07:59.084266 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.084066 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ac9ac8-18c8-45bb-839b-8f9cce149d45" containerName="pull" Apr 23 18:07:59.084266 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.084076 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="96ac9ac8-18c8-45bb-839b-8f9cce149d45" containerName="extract" Apr 23 18:07:59.084266 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.084081 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ac9ac8-18c8-45bb-839b-8f9cce149d45" containerName="extract" Apr 23 18:07:59.084266 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.084115 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="96ac9ac8-18c8-45bb-839b-8f9cce149d45" containerName="extract" Apr 23 18:07:59.109376 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.109353 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s"] Apr 23 18:07:59.109497 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.109466 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s" Apr 23 18:07:59.112236 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.112208 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"kedaorg-certs\"" Apr 23 18:07:59.112236 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.112230 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"kube-root-ca.crt\"" Apr 23 18:07:59.112452 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.112274 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"openshift-service-ca.crt\"" Apr 23 18:07:59.112452 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.112290 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"custom-metrics-autoscaler-operator-dockercfg-qfs6r\"" Apr 23 18:07:59.226722 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.226681 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: 
\"kubernetes.io/secret/acaf7d9b-496a-4814-8849-c81ffc6c2721-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s\" (UID: \"acaf7d9b-496a-4814-8849-c81ffc6c2721\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s" Apr 23 18:07:59.226882 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.226733 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thfcr\" (UniqueName: \"kubernetes.io/projected/acaf7d9b-496a-4814-8849-c81ffc6c2721-kube-api-access-thfcr\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s\" (UID: \"acaf7d9b-496a-4814-8849-c81ffc6c2721\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s" Apr 23 18:07:59.327662 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.327632 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/acaf7d9b-496a-4814-8849-c81ffc6c2721-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s\" (UID: \"acaf7d9b-496a-4814-8849-c81ffc6c2721\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s" Apr 23 18:07:59.327875 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.327671 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-thfcr\" (UniqueName: \"kubernetes.io/projected/acaf7d9b-496a-4814-8849-c81ffc6c2721-kube-api-access-thfcr\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s\" (UID: \"acaf7d9b-496a-4814-8849-c81ffc6c2721\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s" Apr 23 18:07:59.329996 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.329976 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/acaf7d9b-496a-4814-8849-c81ffc6c2721-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s\" (UID: 
\"acaf7d9b-496a-4814-8849-c81ffc6c2721\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s" Apr 23 18:07:59.336030 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.335982 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-thfcr\" (UniqueName: \"kubernetes.io/projected/acaf7d9b-496a-4814-8849-c81ffc6c2721-kube-api-access-thfcr\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s\" (UID: \"acaf7d9b-496a-4814-8849-c81ffc6c2721\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s" Apr 23 18:07:59.419624 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.419595 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s" Apr 23 18:07:59.556826 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.556804 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s"] Apr 23 18:07:59.559246 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:07:59.559217 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacaf7d9b_496a_4814_8849_c81ffc6c2721.slice/crio-8c0fda07a7297af397e04310e234cfa796e45f28eb5c25daf48f7e52234f399e WatchSource:0}: Error finding container 8c0fda07a7297af397e04310e234cfa796e45f28eb5c25daf48f7e52234f399e: Status 404 returned error can't find the container with id 8c0fda07a7297af397e04310e234cfa796e45f28eb5c25daf48f7e52234f399e Apr 23 18:07:59.930114 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:07:59.930075 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s" event={"ID":"acaf7d9b-496a-4814-8849-c81ffc6c2721","Type":"ContainerStarted","Data":"8c0fda07a7297af397e04310e234cfa796e45f28eb5c25daf48f7e52234f399e"} Apr 23 18:08:03.901073 ip-10-0-130-162 kubenswrapper[2572]: 
I0423 18:08:03.901040 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-t52dh"] Apr 23 18:08:03.920512 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:03.920477 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-t52dh"] Apr 23 18:08:03.920630 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:03.920599 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:08:03.923334 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:03.923296 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-certs\"" Apr 23 18:08:03.923452 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:03.923337 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"keda-ocp-cabundle\"" Apr 23 18:08:03.923452 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:03.923297 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-xxm6h\"" Apr 23 18:08:03.941886 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:03.941858 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s" event={"ID":"acaf7d9b-496a-4814-8849-c81ffc6c2721","Type":"ContainerStarted","Data":"bf36ade337f38ca360dd4ded5aa3ddbdbb122685e964e47c3e4ac1148b6539a7"} Apr 23 18:08:03.942071 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:03.942057 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s" Apr 23 18:08:03.963655 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:03.963631 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: 
\"kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-certificates\") pod \"keda-operator-ffbb595cb-t52dh\" (UID: \"2d361232-05d8-4cb5-a2ff-ebc421268dc8\") " pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:08:03.963757 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:03.963664 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/2d361232-05d8-4cb5-a2ff-ebc421268dc8-cabundle0\") pod \"keda-operator-ffbb595cb-t52dh\" (UID: \"2d361232-05d8-4cb5-a2ff-ebc421268dc8\") " pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:08:03.963757 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:03.963728 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqw4s\" (UniqueName: \"kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-kube-api-access-wqw4s\") pod \"keda-operator-ffbb595cb-t52dh\" (UID: \"2d361232-05d8-4cb5-a2ff-ebc421268dc8\") " pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:08:03.964550 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:03.964509 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s" podStartSLOduration=1.17998729 podStartE2EDuration="4.964498823s" podCreationTimestamp="2026-04-23 18:07:59 +0000 UTC" firstStartedPulling="2026-04-23 18:07:59.561126419 +0000 UTC m=+586.859372530" lastFinishedPulling="2026-04-23 18:08:03.345637953 +0000 UTC m=+590.643884063" observedRunningTime="2026-04-23 18:08:03.963394997 +0000 UTC m=+591.261641130" watchObservedRunningTime="2026-04-23 18:08:03.964498823 +0000 UTC m=+591.262744956" Apr 23 18:08:04.065077 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.065046 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wqw4s\" (UniqueName: 
\"kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-kube-api-access-wqw4s\") pod \"keda-operator-ffbb595cb-t52dh\" (UID: \"2d361232-05d8-4cb5-a2ff-ebc421268dc8\") " pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:08:04.065205 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.065114 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-certificates\") pod \"keda-operator-ffbb595cb-t52dh\" (UID: \"2d361232-05d8-4cb5-a2ff-ebc421268dc8\") " pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:08:04.065205 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.065163 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/2d361232-05d8-4cb5-a2ff-ebc421268dc8-cabundle0\") pod \"keda-operator-ffbb595cb-t52dh\" (UID: \"2d361232-05d8-4cb5-a2ff-ebc421268dc8\") " pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:08:04.065291 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.065268 2572 projected.go:264] Couldn't get secret openshift-keda/keda-operator-certs: secret "keda-operator-certs" not found Apr 23 18:08:04.065291 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.065288 2572 secret.go:281] references non-existent secret key: ca.crt Apr 23 18:08:04.065374 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.065298 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 23 18:08:04.065374 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.065313 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-t52dh: [secret "keda-operator-certs" not found, references non-existent secret key: ca.crt] Apr 23 18:08:04.065584 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.065381 2572 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-certificates podName:2d361232-05d8-4cb5-a2ff-ebc421268dc8 nodeName:}" failed. No retries permitted until 2026-04-23 18:08:04.565362868 +0000 UTC m=+591.863608979 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-certificates") pod "keda-operator-ffbb595cb-t52dh" (UID: "2d361232-05d8-4cb5-a2ff-ebc421268dc8") : [secret "keda-operator-certs" not found, references non-existent secret key: ca.crt] Apr 23 18:08:04.065883 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.065853 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/2d361232-05d8-4cb5-a2ff-ebc421268dc8-cabundle0\") pod \"keda-operator-ffbb595cb-t52dh\" (UID: \"2d361232-05d8-4cb5-a2ff-ebc421268dc8\") " pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:08:04.076501 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.076474 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqw4s\" (UniqueName: \"kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-kube-api-access-wqw4s\") pod \"keda-operator-ffbb595cb-t52dh\" (UID: \"2d361232-05d8-4cb5-a2ff-ebc421268dc8\") " pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:08:04.182667 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.182575 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz"] Apr 23 18:08:04.211366 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.211316 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz"] Apr 23 18:08:04.211511 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.211385 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:04.215091 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.215068 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-metrics-apiserver-certs\"" Apr 23 18:08:04.267292 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.267267 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvxxl\" (UniqueName: \"kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-kube-api-access-cvxxl\") pod \"keda-metrics-apiserver-7c9f485588-b6rqz\" (UID: \"031646a0-9199-47f5-a4de-d800c3c34347\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:04.267443 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.267344 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-certificates\") pod \"keda-metrics-apiserver-7c9f485588-b6rqz\" (UID: \"031646a0-9199-47f5-a4de-d800c3c34347\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:04.267443 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.267384 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/031646a0-9199-47f5-a4de-d800c3c34347-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-b6rqz\" (UID: \"031646a0-9199-47f5-a4de-d800c3c34347\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:04.368256 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.368221 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cvxxl\" (UniqueName: \"kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-kube-api-access-cvxxl\") pod \"keda-metrics-apiserver-7c9f485588-b6rqz\" 
(UID: \"031646a0-9199-47f5-a4de-d800c3c34347\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:04.368470 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.368340 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-certificates\") pod \"keda-metrics-apiserver-7c9f485588-b6rqz\" (UID: \"031646a0-9199-47f5-a4de-d800c3c34347\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:04.368470 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.368417 2572 secret.go:281] references non-existent secret key: tls.crt Apr 23 18:08:04.368470 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.368428 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 18:08:04.368470 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.368445 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz: references non-existent secret key: tls.crt Apr 23 18:08:04.368673 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.368496 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-certificates podName:031646a0-9199-47f5-a4de-d800c3c34347 nodeName:}" failed. No retries permitted until 2026-04-23 18:08:04.868483358 +0000 UTC m=+592.166729470 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-certificates") pod "keda-metrics-apiserver-7c9f485588-b6rqz" (UID: "031646a0-9199-47f5-a4de-d800c3c34347") : references non-existent secret key: tls.crt Apr 23 18:08:04.368673 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.368488 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/031646a0-9199-47f5-a4de-d800c3c34347-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-b6rqz\" (UID: \"031646a0-9199-47f5-a4de-d800c3c34347\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:04.368827 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.368809 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/031646a0-9199-47f5-a4de-d800c3c34347-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-b6rqz\" (UID: \"031646a0-9199-47f5-a4de-d800c3c34347\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:04.378240 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.378214 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvxxl\" (UniqueName: \"kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-kube-api-access-cvxxl\") pod \"keda-metrics-apiserver-7c9f485588-b6rqz\" (UID: \"031646a0-9199-47f5-a4de-d800c3c34347\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:04.476271 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.476180 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-admission-cf49989db-7lsv5"] Apr 23 18:08:04.490333 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.490293 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-7lsv5"] Apr 23 18:08:04.490479 ip-10-0-130-162 
kubenswrapper[2572]: I0423 18:08:04.490433 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-7lsv5" Apr 23 18:08:04.493201 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.493181 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-admission-webhooks-certs\"" Apr 23 18:08:04.570074 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.570046 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-certificates\") pod \"keda-operator-ffbb595cb-t52dh\" (UID: \"2d361232-05d8-4cb5-a2ff-ebc421268dc8\") " pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:08:04.570214 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.570083 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/6f6d037f-dca5-44e0-83cf-4d7b5b0508f0-certificates\") pod \"keda-admission-cf49989db-7lsv5\" (UID: \"6f6d037f-dca5-44e0-83cf-4d7b5b0508f0\") " pod="openshift-keda/keda-admission-cf49989db-7lsv5" Apr 23 18:08:04.570214 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.570102 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcq9c\" (UniqueName: \"kubernetes.io/projected/6f6d037f-dca5-44e0-83cf-4d7b5b0508f0-kube-api-access-fcq9c\") pod \"keda-admission-cf49989db-7lsv5\" (UID: \"6f6d037f-dca5-44e0-83cf-4d7b5b0508f0\") " pod="openshift-keda/keda-admission-cf49989db-7lsv5" Apr 23 18:08:04.570214 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.570180 2572 secret.go:281] references non-existent secret key: ca.crt Apr 23 18:08:04.570214 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.570202 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references 
non-existent secret key: ca.crt Apr 23 18:08:04.570214 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.570211 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-t52dh: references non-existent secret key: ca.crt Apr 23 18:08:04.570405 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.570269 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-certificates podName:2d361232-05d8-4cb5-a2ff-ebc421268dc8 nodeName:}" failed. No retries permitted until 2026-04-23 18:08:05.570252695 +0000 UTC m=+592.868498806 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-certificates") pod "keda-operator-ffbb595cb-t52dh" (UID: "2d361232-05d8-4cb5-a2ff-ebc421268dc8") : references non-existent secret key: ca.crt Apr 23 18:08:04.670451 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.670419 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/6f6d037f-dca5-44e0-83cf-4d7b5b0508f0-certificates\") pod \"keda-admission-cf49989db-7lsv5\" (UID: \"6f6d037f-dca5-44e0-83cf-4d7b5b0508f0\") " pod="openshift-keda/keda-admission-cf49989db-7lsv5" Apr 23 18:08:04.670451 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.670450 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fcq9c\" (UniqueName: \"kubernetes.io/projected/6f6d037f-dca5-44e0-83cf-4d7b5b0508f0-kube-api-access-fcq9c\") pod \"keda-admission-cf49989db-7lsv5\" (UID: \"6f6d037f-dca5-44e0-83cf-4d7b5b0508f0\") " pod="openshift-keda/keda-admission-cf49989db-7lsv5" Apr 23 18:08:04.670627 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.670563 2572 projected.go:264] Couldn't get secret openshift-keda/keda-admission-webhooks-certs: secret 
"keda-admission-webhooks-certs" not found Apr 23 18:08:04.670627 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.670585 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-admission-cf49989db-7lsv5: secret "keda-admission-webhooks-certs" not found Apr 23 18:08:04.670704 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.670640 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f6d037f-dca5-44e0-83cf-4d7b5b0508f0-certificates podName:6f6d037f-dca5-44e0-83cf-4d7b5b0508f0 nodeName:}" failed. No retries permitted until 2026-04-23 18:08:05.170625512 +0000 UTC m=+592.468871623 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/6f6d037f-dca5-44e0-83cf-4d7b5b0508f0-certificates") pod "keda-admission-cf49989db-7lsv5" (UID: "6f6d037f-dca5-44e0-83cf-4d7b5b0508f0") : secret "keda-admission-webhooks-certs" not found Apr 23 18:08:04.681912 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.681891 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcq9c\" (UniqueName: \"kubernetes.io/projected/6f6d037f-dca5-44e0-83cf-4d7b5b0508f0-kube-api-access-fcq9c\") pod \"keda-admission-cf49989db-7lsv5\" (UID: \"6f6d037f-dca5-44e0-83cf-4d7b5b0508f0\") " pod="openshift-keda/keda-admission-cf49989db-7lsv5" Apr 23 18:08:04.871861 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:04.871828 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-certificates\") pod \"keda-metrics-apiserver-7c9f485588-b6rqz\" (UID: \"031646a0-9199-47f5-a4de-d800c3c34347\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:04.872021 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.871969 2572 secret.go:281] references non-existent secret key: tls.crt Apr 23 
18:08:04.872021 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.871986 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 18:08:04.872021 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.872003 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz: references non-existent secret key: tls.crt Apr 23 18:08:04.872118 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:04.872053 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-certificates podName:031646a0-9199-47f5-a4de-d800c3c34347 nodeName:}" failed. No retries permitted until 2026-04-23 18:08:05.872040377 +0000 UTC m=+593.170286488 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-certificates") pod "keda-metrics-apiserver-7c9f485588-b6rqz" (UID: "031646a0-9199-47f5-a4de-d800c3c34347") : references non-existent secret key: tls.crt Apr 23 18:08:05.175373 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:05.175251 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/6f6d037f-dca5-44e0-83cf-4d7b5b0508f0-certificates\") pod \"keda-admission-cf49989db-7lsv5\" (UID: \"6f6d037f-dca5-44e0-83cf-4d7b5b0508f0\") " pod="openshift-keda/keda-admission-cf49989db-7lsv5" Apr 23 18:08:05.177755 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:05.177729 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/6f6d037f-dca5-44e0-83cf-4d7b5b0508f0-certificates\") pod \"keda-admission-cf49989db-7lsv5\" (UID: \"6f6d037f-dca5-44e0-83cf-4d7b5b0508f0\") " pod="openshift-keda/keda-admission-cf49989db-7lsv5" Apr 23 18:08:05.400495 
ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:05.400458 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-7lsv5" Apr 23 18:08:05.518061 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:05.518031 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-7lsv5"] Apr 23 18:08:05.521058 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:08:05.521019 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f6d037f_dca5_44e0_83cf_4d7b5b0508f0.slice/crio-bd8c7a00dc246f3b64f3fc70cf023db62712bc610f834b41cee393a726dbb915 WatchSource:0}: Error finding container bd8c7a00dc246f3b64f3fc70cf023db62712bc610f834b41cee393a726dbb915: Status 404 returned error can't find the container with id bd8c7a00dc246f3b64f3fc70cf023db62712bc610f834b41cee393a726dbb915 Apr 23 18:08:05.579268 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:05.579246 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-certificates\") pod \"keda-operator-ffbb595cb-t52dh\" (UID: \"2d361232-05d8-4cb5-a2ff-ebc421268dc8\") " pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:08:05.579395 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:05.579379 2572 secret.go:281] references non-existent secret key: ca.crt Apr 23 18:08:05.579441 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:05.579394 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 23 18:08:05.579441 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:05.579403 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-t52dh: references non-existent secret key: ca.crt Apr 23 18:08:05.579503 ip-10-0-130-162 
kubenswrapper[2572]: E0423 18:08:05.579456 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-certificates podName:2d361232-05d8-4cb5-a2ff-ebc421268dc8 nodeName:}" failed. No retries permitted until 2026-04-23 18:08:07.579442934 +0000 UTC m=+594.877689044 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-certificates") pod "keda-operator-ffbb595cb-t52dh" (UID: "2d361232-05d8-4cb5-a2ff-ebc421268dc8") : references non-existent secret key: ca.crt Apr 23 18:08:05.880901 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:05.880863 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-certificates\") pod \"keda-metrics-apiserver-7c9f485588-b6rqz\" (UID: \"031646a0-9199-47f5-a4de-d800c3c34347\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:05.881064 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:05.881028 2572 secret.go:281] references non-existent secret key: tls.crt Apr 23 18:08:05.881064 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:05.881051 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 18:08:05.881181 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:05.881074 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz: references non-existent secret key: tls.crt Apr 23 18:08:05.881181 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:05.881139 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-certificates podName:031646a0-9199-47f5-a4de-d800c3c34347 nodeName:}" failed. 
No retries permitted until 2026-04-23 18:08:07.881120402 +0000 UTC m=+595.179366516 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-certificates") pod "keda-metrics-apiserver-7c9f485588-b6rqz" (UID: "031646a0-9199-47f5-a4de-d800c3c34347") : references non-existent secret key: tls.crt Apr 23 18:08:05.948074 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:05.948033 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-7lsv5" event={"ID":"6f6d037f-dca5-44e0-83cf-4d7b5b0508f0","Type":"ContainerStarted","Data":"bd8c7a00dc246f3b64f3fc70cf023db62712bc610f834b41cee393a726dbb915"} Apr 23 18:08:06.953156 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:06.953065 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-7lsv5" event={"ID":"6f6d037f-dca5-44e0-83cf-4d7b5b0508f0","Type":"ContainerStarted","Data":"2881824d00457dd1b8cecf6b6996e1f8c7b05f5f85790ec3dd877a8aa96b9b6e"} Apr 23 18:08:06.953551 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:06.953165 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-admission-cf49989db-7lsv5" Apr 23 18:08:06.969654 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:06.969615 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-admission-cf49989db-7lsv5" podStartSLOduration=1.816728044 podStartE2EDuration="2.969603957s" podCreationTimestamp="2026-04-23 18:08:04 +0000 UTC" firstStartedPulling="2026-04-23 18:08:05.522466747 +0000 UTC m=+592.820712859" lastFinishedPulling="2026-04-23 18:08:06.675342654 +0000 UTC m=+593.973588772" observedRunningTime="2026-04-23 18:08:06.968540312 +0000 UTC m=+594.266786446" watchObservedRunningTime="2026-04-23 18:08:06.969603957 +0000 UTC m=+594.267850089" Apr 23 18:08:07.594635 ip-10-0-130-162 kubenswrapper[2572]: 
I0423 18:08:07.594585 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-certificates\") pod \"keda-operator-ffbb595cb-t52dh\" (UID: \"2d361232-05d8-4cb5-a2ff-ebc421268dc8\") " pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:08:07.594815 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:07.594709 2572 secret.go:281] references non-existent secret key: ca.crt Apr 23 18:08:07.594815 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:07.594721 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 23 18:08:07.594815 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:07.594729 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-t52dh: references non-existent secret key: ca.crt Apr 23 18:08:07.594815 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:07.594775 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-certificates podName:2d361232-05d8-4cb5-a2ff-ebc421268dc8 nodeName:}" failed. No retries permitted until 2026-04-23 18:08:11.594762828 +0000 UTC m=+598.893008938 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-certificates") pod "keda-operator-ffbb595cb-t52dh" (UID: "2d361232-05d8-4cb5-a2ff-ebc421268dc8") : references non-existent secret key: ca.crt Apr 23 18:08:07.897736 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:07.897641 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-certificates\") pod \"keda-metrics-apiserver-7c9f485588-b6rqz\" (UID: \"031646a0-9199-47f5-a4de-d800c3c34347\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:07.897912 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:07.897811 2572 secret.go:281] references non-existent secret key: tls.crt Apr 23 18:08:07.897912 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:07.897835 2572 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 18:08:07.897912 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:07.897857 2572 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz: references non-existent secret key: tls.crt Apr 23 18:08:07.898060 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:08:07.897924 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-certificates podName:031646a0-9199-47f5-a4de-d800c3c34347 nodeName:}" failed. No retries permitted until 2026-04-23 18:08:11.897902793 +0000 UTC m=+599.196148919 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-certificates") pod "keda-metrics-apiserver-7c9f485588-b6rqz" (UID: "031646a0-9199-47f5-a4de-d800c3c34347") : references non-existent secret key: tls.crt Apr 23 18:08:11.628156 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:11.628111 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-certificates\") pod \"keda-operator-ffbb595cb-t52dh\" (UID: \"2d361232-05d8-4cb5-a2ff-ebc421268dc8\") " pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:08:11.630569 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:11.630544 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/2d361232-05d8-4cb5-a2ff-ebc421268dc8-certificates\") pod \"keda-operator-ffbb595cb-t52dh\" (UID: \"2d361232-05d8-4cb5-a2ff-ebc421268dc8\") " pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:08:11.730860 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:11.730814 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:08:11.845217 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:11.845194 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-t52dh"] Apr 23 18:08:11.847617 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:08:11.847590 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d361232_05d8_4cb5_a2ff_ebc421268dc8.slice/crio-d3a4fc38373984bb507ce08d39eadc31ad074a4e8b34cd88cd163b1c556b91a1 WatchSource:0}: Error finding container d3a4fc38373984bb507ce08d39eadc31ad074a4e8b34cd88cd163b1c556b91a1: Status 404 returned error can't find the container with id d3a4fc38373984bb507ce08d39eadc31ad074a4e8b34cd88cd163b1c556b91a1 Apr 23 18:08:11.930202 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:11.930131 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-certificates\") pod \"keda-metrics-apiserver-7c9f485588-b6rqz\" (UID: \"031646a0-9199-47f5-a4de-d800c3c34347\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:11.932497 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:11.932476 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/031646a0-9199-47f5-a4de-d800c3c34347-certificates\") pod \"keda-metrics-apiserver-7c9f485588-b6rqz\" (UID: \"031646a0-9199-47f5-a4de-d800c3c34347\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:11.966758 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:11.966731 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-t52dh" 
event={"ID":"2d361232-05d8-4cb5-a2ff-ebc421268dc8","Type":"ContainerStarted","Data":"d3a4fc38373984bb507ce08d39eadc31ad074a4e8b34cd88cd163b1c556b91a1"} Apr 23 18:08:12.032780 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:12.032748 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:12.142755 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:12.142725 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz"] Apr 23 18:08:12.145581 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:08:12.145530 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031646a0_9199_47f5_a4de_d800c3c34347.slice/crio-c3c5e31beaa617985b99db9f3812eef1d8b630dc767044659ee6e8ff66005fd6 WatchSource:0}: Error finding container c3c5e31beaa617985b99db9f3812eef1d8b630dc767044659ee6e8ff66005fd6: Status 404 returned error can't find the container with id c3c5e31beaa617985b99db9f3812eef1d8b630dc767044659ee6e8ff66005fd6 Apr 23 18:08:12.971268 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:12.971221 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" event={"ID":"031646a0-9199-47f5-a4de-d800c3c34347","Type":"ContainerStarted","Data":"c3c5e31beaa617985b99db9f3812eef1d8b630dc767044659ee6e8ff66005fd6"} Apr 23 18:08:15.982275 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:15.982235 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-t52dh" event={"ID":"2d361232-05d8-4cb5-a2ff-ebc421268dc8","Type":"ContainerStarted","Data":"01e096e349391175f2869737860d6b4cab1f208755b3ecad80a2339813a8c0fa"} Apr 23 18:08:15.982759 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:15.982365 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:08:15.983632 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:15.983607 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" event={"ID":"031646a0-9199-47f5-a4de-d800c3c34347","Type":"ContainerStarted","Data":"a3da1ffdc09bda6430d755c6ee4def4271490fb041e1106654a8f17a8bde0b05"} Apr 23 18:08:15.983761 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:15.983726 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:16.000086 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:16.000040 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-operator-ffbb595cb-t52dh" podStartSLOduration=9.475706404 podStartE2EDuration="13.000027376s" podCreationTimestamp="2026-04-23 18:08:03 +0000 UTC" firstStartedPulling="2026-04-23 18:08:11.849032236 +0000 UTC m=+599.147278354" lastFinishedPulling="2026-04-23 18:08:15.373353212 +0000 UTC m=+602.671599326" observedRunningTime="2026-04-23 18:08:15.999149112 +0000 UTC m=+603.297395249" watchObservedRunningTime="2026-04-23 18:08:16.000027376 +0000 UTC m=+603.298273537" Apr 23 18:08:16.014306 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:16.014266 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" podStartSLOduration=8.792733854 podStartE2EDuration="12.014254441s" podCreationTimestamp="2026-04-23 18:08:04 +0000 UTC" firstStartedPulling="2026-04-23 18:08:12.146895026 +0000 UTC m=+599.445141137" lastFinishedPulling="2026-04-23 18:08:15.3684156 +0000 UTC m=+602.666661724" observedRunningTime="2026-04-23 18:08:16.013815769 +0000 UTC m=+603.312061906" watchObservedRunningTime="2026-04-23 18:08:16.014254441 +0000 UTC m=+603.312500573" Apr 23 18:08:24.946763 ip-10-0-130-162 kubenswrapper[2572]: I0423 
18:08:24.946684 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-8zn5s" Apr 23 18:08:26.990571 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:26.990539 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-b6rqz" Apr 23 18:08:27.957558 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:27.957528 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-admission-cf49989db-7lsv5" Apr 23 18:08:36.988574 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:08:36.988545 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-operator-ffbb595cb-t52dh" Apr 23 18:09:12.158382 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.158353 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-wjh5t"] Apr 23 18:09:12.160162 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.160147 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/llmisvc-controller-manager-68cc5db7c4-wjh5t" Apr 23 18:09:12.161923 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.161897 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-874ff48d-hfp4r"] Apr 23 18:09:12.163705 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.163687 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/kserve-controller-manager-874ff48d-hfp4r" Apr 23 18:09:12.164643 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.164627 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"llmisvc-controller-manager-dockercfg-vh8zx\"" Apr 23 18:09:12.165316 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.165301 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\"" Apr 23 18:09:12.165421 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.165359 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"llmisvc-webhook-server-cert\"" Apr 23 18:09:12.165480 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.165426 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\"" Apr 23 18:09:12.166291 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.166274 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-webhook-server-cert\"" Apr 23 18:09:12.166697 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.166677 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-controller-manager-dockercfg-77lxl\"" Apr 23 18:09:12.174622 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.174603 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-wjh5t"] Apr 23 18:09:12.177407 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.177388 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-874ff48d-hfp4r"] Apr 23 18:09:12.196214 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.196182 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/seaweedfs-86cc847c5c-7755d"] Apr 23 18:09:12.197972 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.197958 2572 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-7755d" Apr 23 18:09:12.200643 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.200625 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"default-dockercfg-4x8j6\"" Apr 23 18:09:12.201335 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.201306 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"mlpipeline-s3-artifact\"" Apr 23 18:09:12.207597 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.207577 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-7755d"] Apr 23 18:09:12.331289 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.331254 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjxxm\" (UniqueName: \"kubernetes.io/projected/8473641b-fc6c-4681-ba24-a8f981f50e4a-kube-api-access-hjxxm\") pod \"kserve-controller-manager-874ff48d-hfp4r\" (UID: \"8473641b-fc6c-4681-ba24-a8f981f50e4a\") " pod="kserve/kserve-controller-manager-874ff48d-hfp4r" Apr 23 18:09:12.331459 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.331315 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvjn4\" (UniqueName: \"kubernetes.io/projected/d81180b6-523c-4ee4-98f1-28b491a846d0-kube-api-access-kvjn4\") pod \"llmisvc-controller-manager-68cc5db7c4-wjh5t\" (UID: \"d81180b6-523c-4ee4-98f1-28b491a846d0\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-wjh5t" Apr 23 18:09:12.331459 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.331364 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d86m6\" (UniqueName: \"kubernetes.io/projected/e729f255-97fe-411d-a1cf-80d1439e063f-kube-api-access-d86m6\") pod \"seaweedfs-86cc847c5c-7755d\" (UID: \"e729f255-97fe-411d-a1cf-80d1439e063f\") " 
pod="kserve/seaweedfs-86cc847c5c-7755d" Apr 23 18:09:12.331459 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.331400 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e729f255-97fe-411d-a1cf-80d1439e063f-data\") pod \"seaweedfs-86cc847c5c-7755d\" (UID: \"e729f255-97fe-411d-a1cf-80d1439e063f\") " pod="kserve/seaweedfs-86cc847c5c-7755d" Apr 23 18:09:12.331459 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.331420 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8473641b-fc6c-4681-ba24-a8f981f50e4a-cert\") pod \"kserve-controller-manager-874ff48d-hfp4r\" (UID: \"8473641b-fc6c-4681-ba24-a8f981f50e4a\") " pod="kserve/kserve-controller-manager-874ff48d-hfp4r" Apr 23 18:09:12.331459 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.331440 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d81180b6-523c-4ee4-98f1-28b491a846d0-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-wjh5t\" (UID: \"d81180b6-523c-4ee4-98f1-28b491a846d0\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-wjh5t" Apr 23 18:09:12.432108 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.432031 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8473641b-fc6c-4681-ba24-a8f981f50e4a-cert\") pod \"kserve-controller-manager-874ff48d-hfp4r\" (UID: \"8473641b-fc6c-4681-ba24-a8f981f50e4a\") " pod="kserve/kserve-controller-manager-874ff48d-hfp4r" Apr 23 18:09:12.432108 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.432069 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d81180b6-523c-4ee4-98f1-28b491a846d0-cert\") pod 
\"llmisvc-controller-manager-68cc5db7c4-wjh5t\" (UID: \"d81180b6-523c-4ee4-98f1-28b491a846d0\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-wjh5t" Apr 23 18:09:12.432258 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.432139 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hjxxm\" (UniqueName: \"kubernetes.io/projected/8473641b-fc6c-4681-ba24-a8f981f50e4a-kube-api-access-hjxxm\") pod \"kserve-controller-manager-874ff48d-hfp4r\" (UID: \"8473641b-fc6c-4681-ba24-a8f981f50e4a\") " pod="kserve/kserve-controller-manager-874ff48d-hfp4r" Apr 23 18:09:12.432258 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.432198 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kvjn4\" (UniqueName: \"kubernetes.io/projected/d81180b6-523c-4ee4-98f1-28b491a846d0-kube-api-access-kvjn4\") pod \"llmisvc-controller-manager-68cc5db7c4-wjh5t\" (UID: \"d81180b6-523c-4ee4-98f1-28b491a846d0\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-wjh5t" Apr 23 18:09:12.432258 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.432226 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d86m6\" (UniqueName: \"kubernetes.io/projected/e729f255-97fe-411d-a1cf-80d1439e063f-kube-api-access-d86m6\") pod \"seaweedfs-86cc847c5c-7755d\" (UID: \"e729f255-97fe-411d-a1cf-80d1439e063f\") " pod="kserve/seaweedfs-86cc847c5c-7755d" Apr 23 18:09:12.432258 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.432251 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e729f255-97fe-411d-a1cf-80d1439e063f-data\") pod \"seaweedfs-86cc847c5c-7755d\" (UID: \"e729f255-97fe-411d-a1cf-80d1439e063f\") " pod="kserve/seaweedfs-86cc847c5c-7755d" Apr 23 18:09:12.432616 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.432598 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"data\" (UniqueName: \"kubernetes.io/empty-dir/e729f255-97fe-411d-a1cf-80d1439e063f-data\") pod \"seaweedfs-86cc847c5c-7755d\" (UID: \"e729f255-97fe-411d-a1cf-80d1439e063f\") " pod="kserve/seaweedfs-86cc847c5c-7755d" Apr 23 18:09:12.434517 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.434497 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d81180b6-523c-4ee4-98f1-28b491a846d0-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-wjh5t\" (UID: \"d81180b6-523c-4ee4-98f1-28b491a846d0\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-wjh5t" Apr 23 18:09:12.434630 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.434497 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8473641b-fc6c-4681-ba24-a8f981f50e4a-cert\") pod \"kserve-controller-manager-874ff48d-hfp4r\" (UID: \"8473641b-fc6c-4681-ba24-a8f981f50e4a\") " pod="kserve/kserve-controller-manager-874ff48d-hfp4r" Apr 23 18:09:12.440950 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.440922 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d86m6\" (UniqueName: \"kubernetes.io/projected/e729f255-97fe-411d-a1cf-80d1439e063f-kube-api-access-d86m6\") pod \"seaweedfs-86cc847c5c-7755d\" (UID: \"e729f255-97fe-411d-a1cf-80d1439e063f\") " pod="kserve/seaweedfs-86cc847c5c-7755d" Apr 23 18:09:12.441633 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.441614 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjxxm\" (UniqueName: \"kubernetes.io/projected/8473641b-fc6c-4681-ba24-a8f981f50e4a-kube-api-access-hjxxm\") pod \"kserve-controller-manager-874ff48d-hfp4r\" (UID: \"8473641b-fc6c-4681-ba24-a8f981f50e4a\") " pod="kserve/kserve-controller-manager-874ff48d-hfp4r" Apr 23 18:09:12.441739 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.441687 2572 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-kvjn4\" (UniqueName: \"kubernetes.io/projected/d81180b6-523c-4ee4-98f1-28b491a846d0-kube-api-access-kvjn4\") pod \"llmisvc-controller-manager-68cc5db7c4-wjh5t\" (UID: \"d81180b6-523c-4ee4-98f1-28b491a846d0\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-wjh5t" Apr 23 18:09:12.471648 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.471629 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/llmisvc-controller-manager-68cc5db7c4-wjh5t" Apr 23 18:09:12.478027 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.477991 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-874ff48d-hfp4r" Apr 23 18:09:12.506818 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.506765 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-7755d" Apr 23 18:09:12.607135 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.607090 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-wjh5t"] Apr 23 18:09:12.610030 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:09:12.609999 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd81180b6_523c_4ee4_98f1_28b491a846d0.slice/crio-d73718bbd24e1273b1e110af12cf91d59cd98c00fdeed8970166d79ca8c8af48 WatchSource:0}: Error finding container d73718bbd24e1273b1e110af12cf91d59cd98c00fdeed8970166d79ca8c8af48: Status 404 returned error can't find the container with id d73718bbd24e1273b1e110af12cf91d59cd98c00fdeed8970166d79ca8c8af48 Apr 23 18:09:12.621764 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.621739 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-874ff48d-hfp4r"] Apr 23 18:09:12.623884 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:09:12.623860 2572 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8473641b_fc6c_4681_ba24_a8f981f50e4a.slice/crio-0405cb989be419977cb6215cca84fe0b0e052e8449d2fc4c04fc13931237875f WatchSource:0}: Error finding container 0405cb989be419977cb6215cca84fe0b0e052e8449d2fc4c04fc13931237875f: Status 404 returned error can't find the container with id 0405cb989be419977cb6215cca84fe0b0e052e8449d2fc4c04fc13931237875f Apr 23 18:09:12.640758 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:12.640727 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-7755d"] Apr 23 18:09:12.643863 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:09:12.643837 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode729f255_97fe_411d_a1cf_80d1439e063f.slice/crio-c5e275b80f97863ce9a2a56dc8f3cc8042bfd05b6d41272898574b5b6c6d09f2 WatchSource:0}: Error finding container c5e275b80f97863ce9a2a56dc8f3cc8042bfd05b6d41272898574b5b6c6d09f2: Status 404 returned error can't find the container with id c5e275b80f97863ce9a2a56dc8f3cc8042bfd05b6d41272898574b5b6c6d09f2 Apr 23 18:09:13.141653 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:13.141610 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-874ff48d-hfp4r" event={"ID":"8473641b-fc6c-4681-ba24-a8f981f50e4a","Type":"ContainerStarted","Data":"0405cb989be419977cb6215cca84fe0b0e052e8449d2fc4c04fc13931237875f"} Apr 23 18:09:13.142857 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:13.142828 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-wjh5t" event={"ID":"d81180b6-523c-4ee4-98f1-28b491a846d0","Type":"ContainerStarted","Data":"d73718bbd24e1273b1e110af12cf91d59cd98c00fdeed8970166d79ca8c8af48"} Apr 23 18:09:13.144060 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:13.144034 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="kserve/seaweedfs-86cc847c5c-7755d" event={"ID":"e729f255-97fe-411d-a1cf-80d1439e063f","Type":"ContainerStarted","Data":"c5e275b80f97863ce9a2a56dc8f3cc8042bfd05b6d41272898574b5b6c6d09f2"} Apr 23 18:09:17.064836 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:17.064814 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"mlpipeline-s3-artifact\"" Apr 23 18:09:17.159015 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:17.158605 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-874ff48d-hfp4r" event={"ID":"8473641b-fc6c-4681-ba24-a8f981f50e4a","Type":"ContainerStarted","Data":"0d9d5bf5c54cfd4a97238c833c25febf5dbe06aa328e26e0e811349d9cdb86ea"} Apr 23 18:09:17.159168 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:17.159073 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-874ff48d-hfp4r" Apr 23 18:09:17.160470 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:17.160437 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-wjh5t" event={"ID":"d81180b6-523c-4ee4-98f1-28b491a846d0","Type":"ContainerStarted","Data":"372a18518b3c72fed15370658eac2ecff5d00e0ddf36322a9c6e6a49dd0b7e68"} Apr 23 18:09:17.160580 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:17.160523 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-wjh5t" Apr 23 18:09:17.161788 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:17.161754 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-7755d" event={"ID":"e729f255-97fe-411d-a1cf-80d1439e063f","Type":"ContainerStarted","Data":"5911f1d3df78d1d64a410527b8b45828cfced68ff1aba82ab0cdc8012712890a"} Apr 23 18:09:17.161938 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:17.161906 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="kserve/seaweedfs-86cc847c5c-7755d" Apr 23 18:09:17.180045 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:17.180005 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-874ff48d-hfp4r" podStartSLOduration=0.950202454 podStartE2EDuration="5.179995474s" podCreationTimestamp="2026-04-23 18:09:12 +0000 UTC" firstStartedPulling="2026-04-23 18:09:12.625397305 +0000 UTC m=+659.923643423" lastFinishedPulling="2026-04-23 18:09:16.855190331 +0000 UTC m=+664.153436443" observedRunningTime="2026-04-23 18:09:17.178131773 +0000 UTC m=+664.476377906" watchObservedRunningTime="2026-04-23 18:09:17.179995474 +0000 UTC m=+664.478241628" Apr 23 18:09:17.199830 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:17.199788 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/llmisvc-controller-manager-68cc5db7c4-wjh5t" podStartSLOduration=0.785299365 podStartE2EDuration="5.199777258s" podCreationTimestamp="2026-04-23 18:09:12 +0000 UTC" firstStartedPulling="2026-04-23 18:09:12.611289288 +0000 UTC m=+659.909535404" lastFinishedPulling="2026-04-23 18:09:17.025767183 +0000 UTC m=+664.324013297" observedRunningTime="2026-04-23 18:09:17.198113119 +0000 UTC m=+664.496359263" watchObservedRunningTime="2026-04-23 18:09:17.199777258 +0000 UTC m=+664.498023391" Apr 23 18:09:17.219711 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:17.219673 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/seaweedfs-86cc847c5c-7755d" podStartSLOduration=0.8040372 podStartE2EDuration="5.219662876s" podCreationTimestamp="2026-04-23 18:09:12 +0000 UTC" firstStartedPulling="2026-04-23 18:09:12.645452996 +0000 UTC m=+659.943699107" lastFinishedPulling="2026-04-23 18:09:17.061078662 +0000 UTC m=+664.359324783" observedRunningTime="2026-04-23 18:09:17.219349273 +0000 UTC m=+664.517595403" watchObservedRunningTime="2026-04-23 18:09:17.219662876 +0000 UTC m=+664.517909009" Apr 23 18:09:23.167245 
ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:23.167213 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/seaweedfs-86cc847c5c-7755d" Apr 23 18:09:48.166755 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:48.166726 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-wjh5t" Apr 23 18:09:48.169745 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:48.169721 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-874ff48d-hfp4r" Apr 23 18:09:49.673662 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:49.673629 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-874ff48d-hfp4r"] Apr 23 18:09:49.674027 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:49.673885 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve/kserve-controller-manager-874ff48d-hfp4r" podUID="8473641b-fc6c-4681-ba24-a8f981f50e4a" containerName="manager" containerID="cri-o://0d9d5bf5c54cfd4a97238c833c25febf5dbe06aa328e26e0e811349d9cdb86ea" gracePeriod=10 Apr 23 18:09:49.699693 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:49.699670 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-874ff48d-qgp2f"] Apr 23 18:09:49.702716 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:49.702692 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/kserve-controller-manager-874ff48d-qgp2f" Apr 23 18:09:49.709982 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:49.709959 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-874ff48d-qgp2f"] Apr 23 18:09:49.788430 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:49.788394 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn2pv\" (UniqueName: \"kubernetes.io/projected/76e8acbe-712b-420e-b188-828623f502f4-kube-api-access-mn2pv\") pod \"kserve-controller-manager-874ff48d-qgp2f\" (UID: \"76e8acbe-712b-420e-b188-828623f502f4\") " pod="kserve/kserve-controller-manager-874ff48d-qgp2f" Apr 23 18:09:49.788540 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:49.788435 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76e8acbe-712b-420e-b188-828623f502f4-cert\") pod \"kserve-controller-manager-874ff48d-qgp2f\" (UID: \"76e8acbe-712b-420e-b188-828623f502f4\") " pod="kserve/kserve-controller-manager-874ff48d-qgp2f" Apr 23 18:09:49.888982 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:49.888953 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mn2pv\" (UniqueName: \"kubernetes.io/projected/76e8acbe-712b-420e-b188-828623f502f4-kube-api-access-mn2pv\") pod \"kserve-controller-manager-874ff48d-qgp2f\" (UID: \"76e8acbe-712b-420e-b188-828623f502f4\") " pod="kserve/kserve-controller-manager-874ff48d-qgp2f" Apr 23 18:09:49.889144 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:49.889003 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76e8acbe-712b-420e-b188-828623f502f4-cert\") pod \"kserve-controller-manager-874ff48d-qgp2f\" (UID: \"76e8acbe-712b-420e-b188-828623f502f4\") " 
pod="kserve/kserve-controller-manager-874ff48d-qgp2f" Apr 23 18:09:49.891571 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:49.891546 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76e8acbe-712b-420e-b188-828623f502f4-cert\") pod \"kserve-controller-manager-874ff48d-qgp2f\" (UID: \"76e8acbe-712b-420e-b188-828623f502f4\") " pod="kserve/kserve-controller-manager-874ff48d-qgp2f" Apr 23 18:09:49.898755 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:49.898732 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn2pv\" (UniqueName: \"kubernetes.io/projected/76e8acbe-712b-420e-b188-828623f502f4-kube-api-access-mn2pv\") pod \"kserve-controller-manager-874ff48d-qgp2f\" (UID: \"76e8acbe-712b-420e-b188-828623f502f4\") " pod="kserve/kserve-controller-manager-874ff48d-qgp2f" Apr 23 18:09:49.913229 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:49.913209 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve/kserve-controller-manager-874ff48d-hfp4r" Apr 23 18:09:49.989376 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:49.989350 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8473641b-fc6c-4681-ba24-a8f981f50e4a-cert\") pod \"8473641b-fc6c-4681-ba24-a8f981f50e4a\" (UID: \"8473641b-fc6c-4681-ba24-a8f981f50e4a\") " Apr 23 18:09:49.989522 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:49.989391 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjxxm\" (UniqueName: \"kubernetes.io/projected/8473641b-fc6c-4681-ba24-a8f981f50e4a-kube-api-access-hjxxm\") pod \"8473641b-fc6c-4681-ba24-a8f981f50e4a\" (UID: \"8473641b-fc6c-4681-ba24-a8f981f50e4a\") " Apr 23 18:09:49.991492 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:49.991459 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8473641b-fc6c-4681-ba24-a8f981f50e4a-kube-api-access-hjxxm" (OuterVolumeSpecName: "kube-api-access-hjxxm") pod "8473641b-fc6c-4681-ba24-a8f981f50e4a" (UID: "8473641b-fc6c-4681-ba24-a8f981f50e4a"). InnerVolumeSpecName "kube-api-access-hjxxm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 18:09:49.991588 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:49.991500 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8473641b-fc6c-4681-ba24-a8f981f50e4a-cert" (OuterVolumeSpecName: "cert") pod "8473641b-fc6c-4681-ba24-a8f981f50e4a" (UID: "8473641b-fc6c-4681-ba24-a8f981f50e4a"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:09:50.061684 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:50.061653 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/kserve-controller-manager-874ff48d-qgp2f" Apr 23 18:09:50.090792 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:50.090760 2572 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8473641b-fc6c-4681-ba24-a8f981f50e4a-cert\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\"" Apr 23 18:09:50.090792 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:50.090791 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hjxxm\" (UniqueName: \"kubernetes.io/projected/8473641b-fc6c-4681-ba24-a8f981f50e4a-kube-api-access-hjxxm\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\"" Apr 23 18:09:50.177837 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:50.177680 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-874ff48d-qgp2f"] Apr 23 18:09:50.180246 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:09:50.180219 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76e8acbe_712b_420e_b188_828623f502f4.slice/crio-d2e994fa48e6a7fdc09bf2737082853e8d1bffcd7e64a7badcd70b411eb1086c WatchSource:0}: Error finding container d2e994fa48e6a7fdc09bf2737082853e8d1bffcd7e64a7badcd70b411eb1086c: Status 404 returned error can't find the container with id d2e994fa48e6a7fdc09bf2737082853e8d1bffcd7e64a7badcd70b411eb1086c Apr 23 18:09:50.249329 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:50.249233 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-874ff48d-qgp2f" event={"ID":"76e8acbe-712b-420e-b188-828623f502f4","Type":"ContainerStarted","Data":"d2e994fa48e6a7fdc09bf2737082853e8d1bffcd7e64a7badcd70b411eb1086c"} Apr 23 18:09:50.250265 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:50.250240 2572 generic.go:358] "Generic (PLEG): container finished" podID="8473641b-fc6c-4681-ba24-a8f981f50e4a" 
containerID="0d9d5bf5c54cfd4a97238c833c25febf5dbe06aa328e26e0e811349d9cdb86ea" exitCode=0
Apr 23 18:09:50.250392 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:50.250274 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-874ff48d-hfp4r" event={"ID":"8473641b-fc6c-4681-ba24-a8f981f50e4a","Type":"ContainerDied","Data":"0d9d5bf5c54cfd4a97238c833c25febf5dbe06aa328e26e0e811349d9cdb86ea"}
Apr 23 18:09:50.250392 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:50.250297 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-874ff48d-hfp4r" event={"ID":"8473641b-fc6c-4681-ba24-a8f981f50e4a","Type":"ContainerDied","Data":"0405cb989be419977cb6215cca84fe0b0e052e8449d2fc4c04fc13931237875f"}
Apr 23 18:09:50.250392 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:50.250306 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-874ff48d-hfp4r"
Apr 23 18:09:50.250392 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:50.250316 2572 scope.go:117] "RemoveContainer" containerID="0d9d5bf5c54cfd4a97238c833c25febf5dbe06aa328e26e0e811349d9cdb86ea"
Apr 23 18:09:50.258510 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:50.258492 2572 scope.go:117] "RemoveContainer" containerID="0d9d5bf5c54cfd4a97238c833c25febf5dbe06aa328e26e0e811349d9cdb86ea"
Apr 23 18:09:50.258775 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:09:50.258756 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d9d5bf5c54cfd4a97238c833c25febf5dbe06aa328e26e0e811349d9cdb86ea\": container with ID starting with 0d9d5bf5c54cfd4a97238c833c25febf5dbe06aa328e26e0e811349d9cdb86ea not found: ID does not exist" containerID="0d9d5bf5c54cfd4a97238c833c25febf5dbe06aa328e26e0e811349d9cdb86ea"
Apr 23 18:09:50.258835 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:50.258782 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d9d5bf5c54cfd4a97238c833c25febf5dbe06aa328e26e0e811349d9cdb86ea"} err="failed to get container status \"0d9d5bf5c54cfd4a97238c833c25febf5dbe06aa328e26e0e811349d9cdb86ea\": rpc error: code = NotFound desc = could not find container \"0d9d5bf5c54cfd4a97238c833c25febf5dbe06aa328e26e0e811349d9cdb86ea\": container with ID starting with 0d9d5bf5c54cfd4a97238c833c25febf5dbe06aa328e26e0e811349d9cdb86ea not found: ID does not exist"
Apr 23 18:09:50.271099 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:50.271075 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-874ff48d-hfp4r"]
Apr 23 18:09:50.275312 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:50.275292 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve/kserve-controller-manager-874ff48d-hfp4r"]
Apr 23 18:09:51.254571 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:51.254533 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-874ff48d-qgp2f" event={"ID":"76e8acbe-712b-420e-b188-828623f502f4","Type":"ContainerStarted","Data":"afd1edb9ec4dbcb47999304436ce06f220990abeb62e45feef8ec90d948fdd9b"}
Apr 23 18:09:51.255052 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:51.254655 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-874ff48d-qgp2f"
Apr 23 18:09:51.272113 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:51.272058 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-874ff48d-qgp2f" podStartSLOduration=1.726315163 podStartE2EDuration="2.272042322s" podCreationTimestamp="2026-04-23 18:09:49 +0000 UTC" firstStartedPulling="2026-04-23 18:09:50.18149179 +0000 UTC m=+697.479737907" lastFinishedPulling="2026-04-23 18:09:50.727218952 +0000 UTC m=+698.025465066" observedRunningTime="2026-04-23 18:09:51.271499147 +0000 UTC m=+698.569745279" watchObservedRunningTime="2026-04-23 18:09:51.272042322 +0000 UTC m=+698.570288456"
Apr 23 18:09:51.307995 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:09:51.307964 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8473641b-fc6c-4681-ba24-a8f981f50e4a" path="/var/lib/kubelet/pods/8473641b-fc6c-4681-ba24-a8f981f50e4a/volumes"
Apr 23 18:10:22.263032 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:22.263003 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-874ff48d-qgp2f"
Apr 23 18:10:23.130529 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.130492 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/model-serving-api-86f7b4b499-q8wp2"]
Apr 23 18:10:23.130808 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.130795 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8473641b-fc6c-4681-ba24-a8f981f50e4a" containerName="manager"
Apr 23 18:10:23.130855 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.130810 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="8473641b-fc6c-4681-ba24-a8f981f50e4a" containerName="manager"
Apr 23 18:10:23.130855 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.130852 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="8473641b-fc6c-4681-ba24-a8f981f50e4a" containerName="manager"
Apr 23 18:10:23.133685 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.133667 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-q8wp2"
Apr 23 18:10:23.136259 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.136234 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-tls\""
Apr 23 18:10:23.136465 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.136393 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-dockercfg-ssm6p\""
Apr 23 18:10:23.144623 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.144599 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-q8wp2"]
Apr 23 18:10:23.147846 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.147822 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/odh-model-controller-696fc77849-w68zd"]
Apr 23 18:10:23.150788 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.150770 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/odh-model-controller-696fc77849-w68zd"
Apr 23 18:10:23.153495 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.153472 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-dockercfg-gjxx6\""
Apr 23 18:10:23.153652 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.153638 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-webhook-cert\""
Apr 23 18:10:23.158912 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.158893 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-w68zd"]
Apr 23 18:10:23.212285 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.212251 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njrsl\" (UniqueName: \"kubernetes.io/projected/20ae28b4-7d39-4b0c-829c-58055b904524-kube-api-access-njrsl\") pod \"model-serving-api-86f7b4b499-q8wp2\" (UID: \"20ae28b4-7d39-4b0c-829c-58055b904524\") " pod="kserve/model-serving-api-86f7b4b499-q8wp2"
Apr 23 18:10:23.212452 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.212295 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/20ae28b4-7d39-4b0c-829c-58055b904524-tls-certs\") pod \"model-serving-api-86f7b4b499-q8wp2\" (UID: \"20ae28b4-7d39-4b0c-829c-58055b904524\") " pod="kserve/model-serving-api-86f7b4b499-q8wp2"
Apr 23 18:10:23.212452 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.212360 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18066472-d711-4871-8c9c-78d6c6b3ebe5-cert\") pod \"odh-model-controller-696fc77849-w68zd\" (UID: \"18066472-d711-4871-8c9c-78d6c6b3ebe5\") " pod="kserve/odh-model-controller-696fc77849-w68zd"
Apr 23 18:10:23.212452 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.212389 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8lhd\" (UniqueName: \"kubernetes.io/projected/18066472-d711-4871-8c9c-78d6c6b3ebe5-kube-api-access-t8lhd\") pod \"odh-model-controller-696fc77849-w68zd\" (UID: \"18066472-d711-4871-8c9c-78d6c6b3ebe5\") " pod="kserve/odh-model-controller-696fc77849-w68zd"
Apr 23 18:10:23.312835 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.312796 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18066472-d711-4871-8c9c-78d6c6b3ebe5-cert\") pod \"odh-model-controller-696fc77849-w68zd\" (UID: \"18066472-d711-4871-8c9c-78d6c6b3ebe5\") " pod="kserve/odh-model-controller-696fc77849-w68zd"
Apr 23 18:10:23.313234 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.312850 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t8lhd\" (UniqueName: \"kubernetes.io/projected/18066472-d711-4871-8c9c-78d6c6b3ebe5-kube-api-access-t8lhd\") pod \"odh-model-controller-696fc77849-w68zd\" (UID: \"18066472-d711-4871-8c9c-78d6c6b3ebe5\") " pod="kserve/odh-model-controller-696fc77849-w68zd"
Apr 23 18:10:23.313234 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.312877 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-njrsl\" (UniqueName: \"kubernetes.io/projected/20ae28b4-7d39-4b0c-829c-58055b904524-kube-api-access-njrsl\") pod \"model-serving-api-86f7b4b499-q8wp2\" (UID: \"20ae28b4-7d39-4b0c-829c-58055b904524\") " pod="kserve/model-serving-api-86f7b4b499-q8wp2"
Apr 23 18:10:23.313234 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.312904 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/20ae28b4-7d39-4b0c-829c-58055b904524-tls-certs\") pod \"model-serving-api-86f7b4b499-q8wp2\" (UID: \"20ae28b4-7d39-4b0c-829c-58055b904524\") " pod="kserve/model-serving-api-86f7b4b499-q8wp2"
Apr 23 18:10:23.313234 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:10:23.312939 2572 secret.go:189] Couldn't get secret kserve/odh-model-controller-webhook-cert: secret "odh-model-controller-webhook-cert" not found
Apr 23 18:10:23.313234 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:10:23.312992 2572 secret.go:189] Couldn't get secret kserve/model-serving-api-tls: secret "model-serving-api-tls" not found
Apr 23 18:10:23.313234 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:10:23.313003 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18066472-d711-4871-8c9c-78d6c6b3ebe5-cert podName:18066472-d711-4871-8c9c-78d6c6b3ebe5 nodeName:}" failed. No retries permitted until 2026-04-23 18:10:23.812987033 +0000 UTC m=+731.111233144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/18066472-d711-4871-8c9c-78d6c6b3ebe5-cert") pod "odh-model-controller-696fc77849-w68zd" (UID: "18066472-d711-4871-8c9c-78d6c6b3ebe5") : secret "odh-model-controller-webhook-cert" not found
Apr 23 18:10:23.313234 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:10:23.313037 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20ae28b4-7d39-4b0c-829c-58055b904524-tls-certs podName:20ae28b4-7d39-4b0c-829c-58055b904524 nodeName:}" failed. No retries permitted until 2026-04-23 18:10:23.813023114 +0000 UTC m=+731.111269229 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certs" (UniqueName: "kubernetes.io/secret/20ae28b4-7d39-4b0c-829c-58055b904524-tls-certs") pod "model-serving-api-86f7b4b499-q8wp2" (UID: "20ae28b4-7d39-4b0c-829c-58055b904524") : secret "model-serving-api-tls" not found
Apr 23 18:10:23.326608 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.326580 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8lhd\" (UniqueName: \"kubernetes.io/projected/18066472-d711-4871-8c9c-78d6c6b3ebe5-kube-api-access-t8lhd\") pod \"odh-model-controller-696fc77849-w68zd\" (UID: \"18066472-d711-4871-8c9c-78d6c6b3ebe5\") " pod="kserve/odh-model-controller-696fc77849-w68zd"
Apr 23 18:10:23.326721 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.326625 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-njrsl\" (UniqueName: \"kubernetes.io/projected/20ae28b4-7d39-4b0c-829c-58055b904524-kube-api-access-njrsl\") pod \"model-serving-api-86f7b4b499-q8wp2\" (UID: \"20ae28b4-7d39-4b0c-829c-58055b904524\") " pod="kserve/model-serving-api-86f7b4b499-q8wp2"
Apr 23 18:10:23.817598 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.817561 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/20ae28b4-7d39-4b0c-829c-58055b904524-tls-certs\") pod \"model-serving-api-86f7b4b499-q8wp2\" (UID: \"20ae28b4-7d39-4b0c-829c-58055b904524\") " pod="kserve/model-serving-api-86f7b4b499-q8wp2"
Apr 23 18:10:23.817763 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.817623 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18066472-d711-4871-8c9c-78d6c6b3ebe5-cert\") pod \"odh-model-controller-696fc77849-w68zd\" (UID: \"18066472-d711-4871-8c9c-78d6c6b3ebe5\") " pod="kserve/odh-model-controller-696fc77849-w68zd"
Apr 23 18:10:23.820170 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.820142 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18066472-d711-4871-8c9c-78d6c6b3ebe5-cert\") pod \"odh-model-controller-696fc77849-w68zd\" (UID: \"18066472-d711-4871-8c9c-78d6c6b3ebe5\") " pod="kserve/odh-model-controller-696fc77849-w68zd"
Apr 23 18:10:23.820276 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:23.820213 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/20ae28b4-7d39-4b0c-829c-58055b904524-tls-certs\") pod \"model-serving-api-86f7b4b499-q8wp2\" (UID: \"20ae28b4-7d39-4b0c-829c-58055b904524\") " pod="kserve/model-serving-api-86f7b4b499-q8wp2"
Apr 23 18:10:24.043736 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:24.043690 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-q8wp2"
Apr 23 18:10:24.060475 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:24.060441 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/odh-model-controller-696fc77849-w68zd"
Apr 23 18:10:24.173439 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:24.173414 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-q8wp2"]
Apr 23 18:10:24.175476 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:10:24.175432 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20ae28b4_7d39_4b0c_829c_58055b904524.slice/crio-acab2221417b66df2be49074cd045f6a56b045ccf97a3038db48013b309e163d WatchSource:0}: Error finding container acab2221417b66df2be49074cd045f6a56b045ccf97a3038db48013b309e163d: Status 404 returned error can't find the container with id acab2221417b66df2be49074cd045f6a56b045ccf97a3038db48013b309e163d
Apr 23 18:10:24.188744 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:24.188723 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-w68zd"]
Apr 23 18:10:24.190783 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:10:24.190761 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18066472_d711_4871_8c9c_78d6c6b3ebe5.slice/crio-84d54d4c7b3d51ecb99a7c324c1d22ef023a542166fd56a55302de2e941a2e12 WatchSource:0}: Error finding container 84d54d4c7b3d51ecb99a7c324c1d22ef023a542166fd56a55302de2e941a2e12: Status 404 returned error can't find the container with id 84d54d4c7b3d51ecb99a7c324c1d22ef023a542166fd56a55302de2e941a2e12
Apr 23 18:10:24.340241 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:24.340162 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-w68zd" event={"ID":"18066472-d711-4871-8c9c-78d6c6b3ebe5","Type":"ContainerStarted","Data":"84d54d4c7b3d51ecb99a7c324c1d22ef023a542166fd56a55302de2e941a2e12"}
Apr 23 18:10:24.341049 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:24.341027 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-q8wp2" event={"ID":"20ae28b4-7d39-4b0c-829c-58055b904524","Type":"ContainerStarted","Data":"acab2221417b66df2be49074cd045f6a56b045ccf97a3038db48013b309e163d"}
Apr 23 18:10:26.350986 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:26.350930 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-q8wp2" event={"ID":"20ae28b4-7d39-4b0c-829c-58055b904524","Type":"ContainerStarted","Data":"508ad2e5e2d30489db8dbdaee36e6d2c5e7df2b691136803feb291d6da54bee1"}
Apr 23 18:10:26.351466 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:26.351010 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/model-serving-api-86f7b4b499-q8wp2"
Apr 23 18:10:26.370754 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:26.370702 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/model-serving-api-86f7b4b499-q8wp2" podStartSLOduration=1.984712725 podStartE2EDuration="3.370687851s" podCreationTimestamp="2026-04-23 18:10:23 +0000 UTC" firstStartedPulling="2026-04-23 18:10:24.177195405 +0000 UTC m=+731.475441517" lastFinishedPulling="2026-04-23 18:10:25.563170532 +0000 UTC m=+732.861416643" observedRunningTime="2026-04-23 18:10:26.36898967 +0000 UTC m=+733.667235806" watchObservedRunningTime="2026-04-23 18:10:26.370687851 +0000 UTC m=+733.668933983"
Apr 23 18:10:27.354694 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:27.354660 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-w68zd" event={"ID":"18066472-d711-4871-8c9c-78d6c6b3ebe5","Type":"ContainerStarted","Data":"e7842bbf12597555c6ee504bf4651c4d91d1c29d02f02a227942eb7e5e6768a1"}
Apr 23 18:10:27.355112 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:27.354888 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/odh-model-controller-696fc77849-w68zd"
Apr 23 18:10:27.373934 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:27.373894 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/odh-model-controller-696fc77849-w68zd" podStartSLOduration=1.71219193 podStartE2EDuration="4.373881755s" podCreationTimestamp="2026-04-23 18:10:23 +0000 UTC" firstStartedPulling="2026-04-23 18:10:24.191882625 +0000 UTC m=+731.490128736" lastFinishedPulling="2026-04-23 18:10:26.853572451 +0000 UTC m=+734.151818561" observedRunningTime="2026-04-23 18:10:27.372981137 +0000 UTC m=+734.671227269" watchObservedRunningTime="2026-04-23 18:10:27.373881755 +0000 UTC m=+734.672127887"
Apr 23 18:10:37.359004 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:37.358976 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/model-serving-api-86f7b4b499-q8wp2"
Apr 23 18:10:38.359771 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:38.359740 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/odh-model-controller-696fc77849-w68zd"
Apr 23 18:10:39.219192 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:39.219155 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/s3-init-cj7j6"]
Apr 23 18:10:39.222382 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:39.222360 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/s3-init-cj7j6"
Apr 23 18:10:39.228903 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:39.228875 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/s3-init-cj7j6"]
Apr 23 18:10:39.328919 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:39.328889 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7wjh\" (UniqueName: \"kubernetes.io/projected/3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f-kube-api-access-d7wjh\") pod \"s3-init-cj7j6\" (UID: \"3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f\") " pod="kserve/s3-init-cj7j6"
Apr 23 18:10:39.429483 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:39.429450 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d7wjh\" (UniqueName: \"kubernetes.io/projected/3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f-kube-api-access-d7wjh\") pod \"s3-init-cj7j6\" (UID: \"3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f\") " pod="kserve/s3-init-cj7j6"
Apr 23 18:10:39.438418 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:39.438394 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7wjh\" (UniqueName: \"kubernetes.io/projected/3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f-kube-api-access-d7wjh\") pod \"s3-init-cj7j6\" (UID: \"3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f\") " pod="kserve/s3-init-cj7j6"
Apr 23 18:10:39.544338 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:39.544290 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/s3-init-cj7j6"
Apr 23 18:10:39.677729 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:39.677707 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/s3-init-cj7j6"]
Apr 23 18:10:39.680098 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:10:39.680069 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dec7eb0_fce9_4a8c_ba06_5acf0b5ce44f.slice/crio-8a6348620c75169a97007405cd1c85765613e0df677927d99a6f81737ce049a5 WatchSource:0}: Error finding container 8a6348620c75169a97007405cd1c85765613e0df677927d99a6f81737ce049a5: Status 404 returned error can't find the container with id 8a6348620c75169a97007405cd1c85765613e0df677927d99a6f81737ce049a5
Apr 23 18:10:40.388740 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:40.388690 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-cj7j6" event={"ID":"3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f","Type":"ContainerStarted","Data":"8a6348620c75169a97007405cd1c85765613e0df677927d99a6f81737ce049a5"}
Apr 23 18:10:44.402309 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:44.402222 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-cj7j6" event={"ID":"3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f","Type":"ContainerStarted","Data":"909a5fa8101444c8633478944504a15b40451bb8654300f05f99d53495814d12"}
Apr 23 18:10:44.418667 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:44.418609 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/s3-init-cj7j6" podStartSLOduration=1.061424532 podStartE2EDuration="5.418590197s" podCreationTimestamp="2026-04-23 18:10:39 +0000 UTC" firstStartedPulling="2026-04-23 18:10:39.681846863 +0000 UTC m=+746.980092974" lastFinishedPulling="2026-04-23 18:10:44.039012525 +0000 UTC m=+751.337258639" observedRunningTime="2026-04-23 18:10:44.41665516 +0000 UTC m=+751.714901294" watchObservedRunningTime="2026-04-23 18:10:44.418590197 +0000 UTC m=+751.716836331"
Apr 23 18:10:47.412260 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:47.412220 2572 generic.go:358] "Generic (PLEG): container finished" podID="3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f" containerID="909a5fa8101444c8633478944504a15b40451bb8654300f05f99d53495814d12" exitCode=0
Apr 23 18:10:47.412688 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:47.412265 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-cj7j6" event={"ID":"3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f","Type":"ContainerDied","Data":"909a5fa8101444c8633478944504a15b40451bb8654300f05f99d53495814d12"}
Apr 23 18:10:48.543458 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:48.543436 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/s3-init-cj7j6"
Apr 23 18:10:48.703795 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:48.703697 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7wjh\" (UniqueName: \"kubernetes.io/projected/3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f-kube-api-access-d7wjh\") pod \"3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f\" (UID: \"3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f\") "
Apr 23 18:10:48.705722 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:48.705692 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f-kube-api-access-d7wjh" (OuterVolumeSpecName: "kube-api-access-d7wjh") pod "3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f" (UID: "3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f"). InnerVolumeSpecName "kube-api-access-d7wjh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:10:48.804876 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:48.804831 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7wjh\" (UniqueName: \"kubernetes.io/projected/3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f-kube-api-access-d7wjh\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:10:49.418541 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:49.418513 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/s3-init-cj7j6"
Apr 23 18:10:49.418541 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:49.418518 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-cj7j6" event={"ID":"3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f","Type":"ContainerDied","Data":"8a6348620c75169a97007405cd1c85765613e0df677927d99a6f81737ce049a5"}
Apr 23 18:10:49.418541 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:49.418554 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a6348620c75169a97007405cd1c85765613e0df677927d99a6f81737ce049a5"
Apr 23 18:10:59.460440 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.460407 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"]
Apr 23 18:10:59.460920 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.460697 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f" containerName="s3-init"
Apr 23 18:10:59.460920 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.460709 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f" containerName="s3-init"
Apr 23 18:10:59.460920 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.460752 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f" containerName="s3-init"
Apr 23 18:10:59.465220 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.465199 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"
Apr 23 18:10:59.468036 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.468016 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"kube-root-ca.crt\""
Apr 23 18:10:59.468130 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.468012 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\""
Apr 23 18:10:59.469135 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.469088 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\""
Apr 23 18:10:59.469275 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.469141 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"isvc-sklearn-graph-1-predictor-serving-cert\""
Apr 23 18:10:59.469275 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.469218 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-b65vr\""
Apr 23 18:10:59.473914 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.473894 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"]
Apr 23 18:10:59.579219 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.579191 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/04269895-41e9-4b40-a921-26abf7648545-isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\") pod \"isvc-sklearn-graph-1-predictor-588b77b98-h9jfg\" (UID: \"04269895-41e9-4b40-a921-26abf7648545\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"
Apr 23 18:10:59.579389 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.579225 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/04269895-41e9-4b40-a921-26abf7648545-proxy-tls\") pod \"isvc-sklearn-graph-1-predictor-588b77b98-h9jfg\" (UID: \"04269895-41e9-4b40-a921-26abf7648545\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"
Apr 23 18:10:59.579389 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.579268 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/04269895-41e9-4b40-a921-26abf7648545-kserve-provision-location\") pod \"isvc-sklearn-graph-1-predictor-588b77b98-h9jfg\" (UID: \"04269895-41e9-4b40-a921-26abf7648545\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"
Apr 23 18:10:59.579389 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.579304 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hmml\" (UniqueName: \"kubernetes.io/projected/04269895-41e9-4b40-a921-26abf7648545-kube-api-access-9hmml\") pod \"isvc-sklearn-graph-1-predictor-588b77b98-h9jfg\" (UID: \"04269895-41e9-4b40-a921-26abf7648545\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"
Apr 23 18:10:59.680605 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.680566 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/04269895-41e9-4b40-a921-26abf7648545-isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\") pod \"isvc-sklearn-graph-1-predictor-588b77b98-h9jfg\" (UID: \"04269895-41e9-4b40-a921-26abf7648545\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"
Apr 23 18:10:59.680605 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.680605 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/04269895-41e9-4b40-a921-26abf7648545-proxy-tls\") pod \"isvc-sklearn-graph-1-predictor-588b77b98-h9jfg\" (UID: \"04269895-41e9-4b40-a921-26abf7648545\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"
Apr 23 18:10:59.680808 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.680638 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/04269895-41e9-4b40-a921-26abf7648545-kserve-provision-location\") pod \"isvc-sklearn-graph-1-predictor-588b77b98-h9jfg\" (UID: \"04269895-41e9-4b40-a921-26abf7648545\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"
Apr 23 18:10:59.680808 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.680662 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9hmml\" (UniqueName: \"kubernetes.io/projected/04269895-41e9-4b40-a921-26abf7648545-kube-api-access-9hmml\") pod \"isvc-sklearn-graph-1-predictor-588b77b98-h9jfg\" (UID: \"04269895-41e9-4b40-a921-26abf7648545\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"
Apr 23 18:10:59.681051 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.681030 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/04269895-41e9-4b40-a921-26abf7648545-kserve-provision-location\") pod \"isvc-sklearn-graph-1-predictor-588b77b98-h9jfg\" (UID: \"04269895-41e9-4b40-a921-26abf7648545\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"
Apr 23 18:10:59.681307 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.681289 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/04269895-41e9-4b40-a921-26abf7648545-isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\") pod \"isvc-sklearn-graph-1-predictor-588b77b98-h9jfg\" (UID: \"04269895-41e9-4b40-a921-26abf7648545\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"
Apr 23 18:10:59.682982 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.682953 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/04269895-41e9-4b40-a921-26abf7648545-proxy-tls\") pod \"isvc-sklearn-graph-1-predictor-588b77b98-h9jfg\" (UID: \"04269895-41e9-4b40-a921-26abf7648545\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"
Apr 23 18:10:59.699609 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.699584 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hmml\" (UniqueName: \"kubernetes.io/projected/04269895-41e9-4b40-a921-26abf7648545-kube-api-access-9hmml\") pod \"isvc-sklearn-graph-1-predictor-588b77b98-h9jfg\" (UID: \"04269895-41e9-4b40-a921-26abf7648545\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"
Apr 23 18:10:59.725558 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.725494 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"]
Apr 23 18:10:59.729306 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.729288 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"
Apr 23 18:10:59.732304 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.732272 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"isvc-xgboost-graph-kube-rbac-proxy-sar-config\""
Apr 23 18:10:59.732497 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.732481 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"isvc-xgboost-graph-predictor-serving-cert\""
Apr 23 18:10:59.748969 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.748942 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"]
Apr 23 18:10:59.777055 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.777031 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"
Apr 23 18:10:59.881809 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.881778 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"isvc-xgboost-graph-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/f760979b-5b29-4f7b-99b5-00538b24bd1d-isvc-xgboost-graph-kube-rbac-proxy-sar-config\") pod \"isvc-xgboost-graph-predictor-669d8d6456-htg4m\" (UID: \"f760979b-5b29-4f7b-99b5-00538b24bd1d\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"
Apr 23 18:10:59.881968 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.881819 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f760979b-5b29-4f7b-99b5-00538b24bd1d-kserve-provision-location\") pod \"isvc-xgboost-graph-predictor-669d8d6456-htg4m\" (UID: \"f760979b-5b29-4f7b-99b5-00538b24bd1d\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"
Apr 23 18:10:59.881968 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.881857 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6696\" (UniqueName: \"kubernetes.io/projected/f760979b-5b29-4f7b-99b5-00538b24bd1d-kube-api-access-k6696\") pod \"isvc-xgboost-graph-predictor-669d8d6456-htg4m\" (UID: \"f760979b-5b29-4f7b-99b5-00538b24bd1d\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"
Apr 23 18:10:59.881968 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.881899 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f760979b-5b29-4f7b-99b5-00538b24bd1d-proxy-tls\") pod \"isvc-xgboost-graph-predictor-669d8d6456-htg4m\" (UID: \"f760979b-5b29-4f7b-99b5-00538b24bd1d\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"
Apr 23 18:10:59.898125 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.898100 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"]
Apr 23 18:10:59.900000 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:10:59.899972 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04269895_41e9_4b40_a921_26abf7648545.slice/crio-73065331ab3ec4e9db5978645e0fed5d15ff9870bd5de707680d9f61b2a24014 WatchSource:0}: Error finding container 73065331ab3ec4e9db5978645e0fed5d15ff9870bd5de707680d9f61b2a24014: Status 404 returned error can't find the container with id 73065331ab3ec4e9db5978645e0fed5d15ff9870bd5de707680d9f61b2a24014
Apr 23 18:10:59.982952 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.982923 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"isvc-xgboost-graph-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/f760979b-5b29-4f7b-99b5-00538b24bd1d-isvc-xgboost-graph-kube-rbac-proxy-sar-config\") pod \"isvc-xgboost-graph-predictor-669d8d6456-htg4m\" (UID: \"f760979b-5b29-4f7b-99b5-00538b24bd1d\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"
Apr 23 18:10:59.983079 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.982967 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f760979b-5b29-4f7b-99b5-00538b24bd1d-kserve-provision-location\") pod \"isvc-xgboost-graph-predictor-669d8d6456-htg4m\" (UID: \"f760979b-5b29-4f7b-99b5-00538b24bd1d\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"
Apr 23 18:10:59.983079 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.983007 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k6696\" (UniqueName: \"kubernetes.io/projected/f760979b-5b29-4f7b-99b5-00538b24bd1d-kube-api-access-k6696\") pod \"isvc-xgboost-graph-predictor-669d8d6456-htg4m\" (UID: \"f760979b-5b29-4f7b-99b5-00538b24bd1d\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"
Apr 23 18:10:59.983079 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.983053 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f760979b-5b29-4f7b-99b5-00538b24bd1d-proxy-tls\") pod \"isvc-xgboost-graph-predictor-669d8d6456-htg4m\" (UID: \"f760979b-5b29-4f7b-99b5-00538b24bd1d\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"
Apr 23 18:10:59.983463 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.983441 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f760979b-5b29-4f7b-99b5-00538b24bd1d-kserve-provision-location\") pod
\"isvc-xgboost-graph-predictor-669d8d6456-htg4m\" (UID: \"f760979b-5b29-4f7b-99b5-00538b24bd1d\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" Apr 23 18:10:59.983736 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.983716 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"isvc-xgboost-graph-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/f760979b-5b29-4f7b-99b5-00538b24bd1d-isvc-xgboost-graph-kube-rbac-proxy-sar-config\") pod \"isvc-xgboost-graph-predictor-669d8d6456-htg4m\" (UID: \"f760979b-5b29-4f7b-99b5-00538b24bd1d\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" Apr 23 18:10:59.985468 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.985452 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f760979b-5b29-4f7b-99b5-00538b24bd1d-proxy-tls\") pod \"isvc-xgboost-graph-predictor-669d8d6456-htg4m\" (UID: \"f760979b-5b29-4f7b-99b5-00538b24bd1d\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" Apr 23 18:10:59.991459 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:10:59.991438 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6696\" (UniqueName: \"kubernetes.io/projected/f760979b-5b29-4f7b-99b5-00538b24bd1d-kube-api-access-k6696\") pod \"isvc-xgboost-graph-predictor-669d8d6456-htg4m\" (UID: \"f760979b-5b29-4f7b-99b5-00538b24bd1d\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" Apr 23 18:11:00.041422 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:00.041380 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" Apr 23 18:11:00.167053 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:00.167026 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"] Apr 23 18:11:00.169841 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:11:00.169810 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf760979b_5b29_4f7b_99b5_00538b24bd1d.slice/crio-3e6020979897c442fbdc29ba6552c9a3424c1e8b7b8abbc0eab6ba7a6d1a5d90 WatchSource:0}: Error finding container 3e6020979897c442fbdc29ba6552c9a3424c1e8b7b8abbc0eab6ba7a6d1a5d90: Status 404 returned error can't find the container with id 3e6020979897c442fbdc29ba6552c9a3424c1e8b7b8abbc0eab6ba7a6d1a5d90 Apr 23 18:11:00.449936 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:00.449845 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" event={"ID":"04269895-41e9-4b40-a921-26abf7648545","Type":"ContainerStarted","Data":"73065331ab3ec4e9db5978645e0fed5d15ff9870bd5de707680d9f61b2a24014"} Apr 23 18:11:00.450855 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:00.450832 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" event={"ID":"f760979b-5b29-4f7b-99b5-00538b24bd1d","Type":"ContainerStarted","Data":"3e6020979897c442fbdc29ba6552c9a3424c1e8b7b8abbc0eab6ba7a6d1a5d90"} Apr 23 18:11:04.467343 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:04.467227 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" event={"ID":"f760979b-5b29-4f7b-99b5-00538b24bd1d","Type":"ContainerStarted","Data":"bf1d7d49ce3dc2cede3cdaf5aeb0d4c4d57f049f70b26a01921eb28fd8025b42"} Apr 23 18:11:04.468875 ip-10-0-130-162 
kubenswrapper[2572]: I0423 18:11:04.468833 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" event={"ID":"04269895-41e9-4b40-a921-26abf7648545","Type":"ContainerStarted","Data":"4c3c668dc1bed2559d1b3e6eba5d5735dd1cecb4e5a404fe6168bc0ce83b1fce"} Apr 23 18:11:08.484482 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:08.484447 2572 generic.go:358] "Generic (PLEG): container finished" podID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerID="bf1d7d49ce3dc2cede3cdaf5aeb0d4c4d57f049f70b26a01921eb28fd8025b42" exitCode=0 Apr 23 18:11:08.484949 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:08.484533 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" event={"ID":"f760979b-5b29-4f7b-99b5-00538b24bd1d","Type":"ContainerDied","Data":"bf1d7d49ce3dc2cede3cdaf5aeb0d4c4d57f049f70b26a01921eb28fd8025b42"} Apr 23 18:11:08.485939 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:08.485918 2572 generic.go:358] "Generic (PLEG): container finished" podID="04269895-41e9-4b40-a921-26abf7648545" containerID="4c3c668dc1bed2559d1b3e6eba5d5735dd1cecb4e5a404fe6168bc0ce83b1fce" exitCode=0 Apr 23 18:11:08.486002 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:08.485963 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" event={"ID":"04269895-41e9-4b40-a921-26abf7648545","Type":"ContainerDied","Data":"4c3c668dc1bed2559d1b3e6eba5d5735dd1cecb4e5a404fe6168bc0ce83b1fce"} Apr 23 18:11:33.581851 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:33.581759 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" event={"ID":"04269895-41e9-4b40-a921-26abf7648545","Type":"ContainerStarted","Data":"c171b2b476f7a1602897cc3faa4f40ace632402bf8fdfd8989341a60370a7983"} Apr 23 18:11:33.583802 ip-10-0-130-162 
kubenswrapper[2572]: I0423 18:11:33.583765 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" event={"ID":"f760979b-5b29-4f7b-99b5-00538b24bd1d","Type":"ContainerStarted","Data":"0a53882baed6bda5f565a4587acd839dd14776c57325028dccfd19f4fd239fd3"} Apr 23 18:11:36.595726 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:36.595632 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" event={"ID":"f760979b-5b29-4f7b-99b5-00538b24bd1d","Type":"ContainerStarted","Data":"77c4fdc10ee22b1b0143c052dcc842ae74cfba5bdea3c5692ad0e1b5454f3c6c"} Apr 23 18:11:36.596136 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:36.595807 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" Apr 23 18:11:36.596136 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:36.595826 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" Apr 23 18:11:36.597413 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:36.597362 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.21:8080: connect: connection refused" Apr 23 18:11:36.597616 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:36.597596 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" event={"ID":"04269895-41e9-4b40-a921-26abf7648545","Type":"ContainerStarted","Data":"b5e6de4cabf5a3e50c268386ce0a2839f14d5120c77c5643ae8f10088457b192"} Apr 23 18:11:36.597838 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:36.597821 2572 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" Apr 23 18:11:36.597891 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:36.597850 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" Apr 23 18:11:36.598725 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:36.598703 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.20:8080: connect: connection refused" Apr 23 18:11:36.615579 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:36.615528 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" podStartSLOduration=1.441987676 podStartE2EDuration="37.615512266s" podCreationTimestamp="2026-04-23 18:10:59 +0000 UTC" firstStartedPulling="2026-04-23 18:11:00.171870452 +0000 UTC m=+767.470116564" lastFinishedPulling="2026-04-23 18:11:36.345395042 +0000 UTC m=+803.643641154" observedRunningTime="2026-04-23 18:11:36.614693073 +0000 UTC m=+803.912939210" watchObservedRunningTime="2026-04-23 18:11:36.615512266 +0000 UTC m=+803.913758398" Apr 23 18:11:36.633853 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:36.633806 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" podStartSLOduration=1.181816519 podStartE2EDuration="37.633787488s" podCreationTimestamp="2026-04-23 18:10:59 +0000 UTC" firstStartedPulling="2026-04-23 18:10:59.901952048 +0000 UTC m=+767.200198165" lastFinishedPulling="2026-04-23 18:11:36.35392302 +0000 UTC m=+803.652169134" observedRunningTime="2026-04-23 18:11:36.632918821 +0000 UTC m=+803.931164986" 
watchObservedRunningTime="2026-04-23 18:11:36.633787488 +0000 UTC m=+803.932033622" Apr 23 18:11:37.600955 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:37.600912 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.21:8080: connect: connection refused" Apr 23 18:11:37.601345 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:37.600992 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.20:8080: connect: connection refused" Apr 23 18:11:42.605344 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:42.605287 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" Apr 23 18:11:42.605758 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:42.605501 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" Apr 23 18:11:42.605758 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:42.605718 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.20:8080: connect: connection refused" Apr 23 18:11:42.606172 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:42.606152 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kserve-container" probeResult="failure" output="dial tcp 
10.134.0.21:8080: connect: connection refused" Apr 23 18:11:52.606110 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:52.606064 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.20:8080: connect: connection refused" Apr 23 18:11:52.607071 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:11:52.606397 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.21:8080: connect: connection refused" Apr 23 18:12:02.605885 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:02.605841 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.20:8080: connect: connection refused" Apr 23 18:12:02.606307 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:02.606265 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.21:8080: connect: connection refused" Apr 23 18:12:12.605671 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:12.605630 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.20:8080: connect: connection refused" Apr 23 18:12:12.606248 ip-10-0-130-162 kubenswrapper[2572]: I0423 
18:12:12.606127 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.21:8080: connect: connection refused" Apr 23 18:12:19.391231 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:19.391193 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m"] Apr 23 18:12:19.411632 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:19.411605 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m"] Apr 23 18:12:19.411769 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:19.411715 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" Apr 23 18:12:19.414218 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:19.414193 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-be430-serving-cert\"" Apr 23 18:12:19.414357 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:19.414218 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-be430-kube-rbac-proxy-sar-config\"" Apr 23 18:12:19.491977 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:19.491942 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94cc07fa-51f6-458a-868d-d9f708940d34-proxy-tls\") pod \"switch-graph-be430-657bc8887b-gmv4m\" (UID: \"94cc07fa-51f6-458a-868d-d9f708940d34\") " pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" Apr 23 18:12:19.492119 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:19.491988 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94cc07fa-51f6-458a-868d-d9f708940d34-openshift-service-ca-bundle\") pod \"switch-graph-be430-657bc8887b-gmv4m\" (UID: \"94cc07fa-51f6-458a-868d-d9f708940d34\") " pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" Apr 23 18:12:19.593288 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:19.593254 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94cc07fa-51f6-458a-868d-d9f708940d34-openshift-service-ca-bundle\") pod \"switch-graph-be430-657bc8887b-gmv4m\" (UID: \"94cc07fa-51f6-458a-868d-d9f708940d34\") " pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" Apr 23 18:12:19.593459 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:19.593337 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94cc07fa-51f6-458a-868d-d9f708940d34-proxy-tls\") pod \"switch-graph-be430-657bc8887b-gmv4m\" (UID: \"94cc07fa-51f6-458a-868d-d9f708940d34\") " pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" Apr 23 18:12:19.593459 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:12:19.593429 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/switch-graph-be430-serving-cert: secret "switch-graph-be430-serving-cert" not found Apr 23 18:12:19.593543 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:12:19.593493 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94cc07fa-51f6-458a-868d-d9f708940d34-proxy-tls podName:94cc07fa-51f6-458a-868d-d9f708940d34 nodeName:}" failed. No retries permitted until 2026-04-23 18:12:20.093476508 +0000 UTC m=+847.391722619 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/94cc07fa-51f6-458a-868d-d9f708940d34-proxy-tls") pod "switch-graph-be430-657bc8887b-gmv4m" (UID: "94cc07fa-51f6-458a-868d-d9f708940d34") : secret "switch-graph-be430-serving-cert" not found Apr 23 18:12:19.593911 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:19.593891 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94cc07fa-51f6-458a-868d-d9f708940d34-openshift-service-ca-bundle\") pod \"switch-graph-be430-657bc8887b-gmv4m\" (UID: \"94cc07fa-51f6-458a-868d-d9f708940d34\") " pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" Apr 23 18:12:20.096887 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:20.096849 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94cc07fa-51f6-458a-868d-d9f708940d34-proxy-tls\") pod \"switch-graph-be430-657bc8887b-gmv4m\" (UID: \"94cc07fa-51f6-458a-868d-d9f708940d34\") " pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" Apr 23 18:12:20.099228 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:20.099200 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94cc07fa-51f6-458a-868d-d9f708940d34-proxy-tls\") pod \"switch-graph-be430-657bc8887b-gmv4m\" (UID: \"94cc07fa-51f6-458a-868d-d9f708940d34\") " pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" Apr 23 18:12:20.322089 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:20.322039 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" Apr 23 18:12:20.442710 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:20.442690 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m"] Apr 23 18:12:20.445303 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:12:20.445278 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94cc07fa_51f6_458a_868d_d9f708940d34.slice/crio-7f7116e89fb23093b46a74929d284aeb534fe1fd73d6bdb7d8d4ef084f1c5932 WatchSource:0}: Error finding container 7f7116e89fb23093b46a74929d284aeb534fe1fd73d6bdb7d8d4ef084f1c5932: Status 404 returned error can't find the container with id 7f7116e89fb23093b46a74929d284aeb534fe1fd73d6bdb7d8d4ef084f1c5932 Apr 23 18:12:20.729827 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:20.729744 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" event={"ID":"94cc07fa-51f6-458a-868d-d9f708940d34","Type":"ContainerStarted","Data":"7f7116e89fb23093b46a74929d284aeb534fe1fd73d6bdb7d8d4ef084f1c5932"} Apr 23 18:12:22.605934 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:22.605866 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.20:8080: connect: connection refused" Apr 23 18:12:22.606420 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:22.606248 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.21:8080: connect: connection refused" Apr 23 18:12:23.741254 ip-10-0-130-162 
kubenswrapper[2572]: I0423 18:12:23.741218 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" event={"ID":"94cc07fa-51f6-458a-868d-d9f708940d34","Type":"ContainerStarted","Data":"dbfea9d7d5e9f085cb308ec6e0a6fe00b46fdd3a54166c62303299868596a07f"} Apr 23 18:12:23.741651 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:23.741376 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" Apr 23 18:12:23.759769 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:23.759718 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" podStartSLOduration=2.3571132710000002 podStartE2EDuration="4.759701874s" podCreationTimestamp="2026-04-23 18:12:19 +0000 UTC" firstStartedPulling="2026-04-23 18:12:20.447658734 +0000 UTC m=+847.745904844" lastFinishedPulling="2026-04-23 18:12:22.850247332 +0000 UTC m=+850.148493447" observedRunningTime="2026-04-23 18:12:23.757275478 +0000 UTC m=+851.055521610" watchObservedRunningTime="2026-04-23 18:12:23.759701874 +0000 UTC m=+851.057948010" Apr 23 18:12:29.751582 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:29.751549 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" Apr 23 18:12:32.606095 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:32.606051 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.20:8080: connect: connection refused" Apr 23 18:12:32.606574 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:32.606197 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" 
podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.21:8080: connect: connection refused" Apr 23 18:12:33.529375 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:33.529342 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m"] Apr 23 18:12:33.529651 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:33.529609 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" podUID="94cc07fa-51f6-458a-868d-d9f708940d34" containerName="switch-graph-be430" containerID="cri-o://dbfea9d7d5e9f085cb308ec6e0a6fe00b46fdd3a54166c62303299868596a07f" gracePeriod=30 Apr 23 18:12:34.748753 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:34.748711 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" podUID="94cc07fa-51f6-458a-868d-d9f708940d34" containerName="switch-graph-be430" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:12:39.749609 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:39.749565 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" podUID="94cc07fa-51f6-458a-868d-d9f708940d34" containerName="switch-graph-be430" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:12:42.606390 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:42.606357 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" Apr 23 18:12:42.607282 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:42.607264 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" Apr 23 18:12:44.748692 ip-10-0-130-162 kubenswrapper[2572]: I0423 
18:12:44.748652 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" podUID="94cc07fa-51f6-458a-868d-d9f708940d34" containerName="switch-graph-be430" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:12:44.749127 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:44.748759 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m"
Apr 23 18:12:49.749148 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:49.749113 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" podUID="94cc07fa-51f6-458a-868d-d9f708940d34" containerName="switch-graph-be430" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:12:54.749074 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:54.748985 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" podUID="94cc07fa-51f6-458a-868d-d9f708940d34" containerName="switch-graph-be430" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:12:59.749504 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:12:59.749466 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" podUID="94cc07fa-51f6-458a-868d-d9f708940d34" containerName="switch-graph-be430" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:13:03.864583 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:03.864491 2572 generic.go:358] "Generic (PLEG): container finished" podID="94cc07fa-51f6-458a-868d-d9f708940d34" containerID="dbfea9d7d5e9f085cb308ec6e0a6fe00b46fdd3a54166c62303299868596a07f" exitCode=0
Apr 23 18:13:03.864583 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:03.864565 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" event={"ID":"94cc07fa-51f6-458a-868d-d9f708940d34","Type":"ContainerDied","Data":"dbfea9d7d5e9f085cb308ec6e0a6fe00b46fdd3a54166c62303299868596a07f"}
Apr 23 18:13:04.172377 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:04.172355 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m"
Apr 23 18:13:04.213232 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:04.213206 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94cc07fa-51f6-458a-868d-d9f708940d34-proxy-tls\") pod \"94cc07fa-51f6-458a-868d-d9f708940d34\" (UID: \"94cc07fa-51f6-458a-868d-d9f708940d34\") "
Apr 23 18:13:04.213379 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:04.213241 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94cc07fa-51f6-458a-868d-d9f708940d34-openshift-service-ca-bundle\") pod \"94cc07fa-51f6-458a-868d-d9f708940d34\" (UID: \"94cc07fa-51f6-458a-868d-d9f708940d34\") "
Apr 23 18:13:04.213620 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:04.213597 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94cc07fa-51f6-458a-868d-d9f708940d34-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "94cc07fa-51f6-458a-868d-d9f708940d34" (UID: "94cc07fa-51f6-458a-868d-d9f708940d34"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:13:04.215243 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:04.215218 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94cc07fa-51f6-458a-868d-d9f708940d34-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "94cc07fa-51f6-458a-868d-d9f708940d34" (UID: "94cc07fa-51f6-458a-868d-d9f708940d34"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:13:04.313961 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:04.313918 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94cc07fa-51f6-458a-868d-d9f708940d34-proxy-tls\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:13:04.313961 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:04.313958 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94cc07fa-51f6-458a-868d-d9f708940d34-openshift-service-ca-bundle\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:13:04.868885 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:04.868855 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m"
Apr 23 18:13:04.868885 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:04.868864 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m" event={"ID":"94cc07fa-51f6-458a-868d-d9f708940d34","Type":"ContainerDied","Data":"7f7116e89fb23093b46a74929d284aeb534fe1fd73d6bdb7d8d4ef084f1c5932"}
Apr 23 18:13:04.869423 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:04.868905 2572 scope.go:117] "RemoveContainer" containerID="dbfea9d7d5e9f085cb308ec6e0a6fe00b46fdd3a54166c62303299868596a07f"
Apr 23 18:13:04.891119 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:04.891096 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m"]
Apr 23 18:13:04.893023 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:04.893001 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-be430-657bc8887b-gmv4m"]
Apr 23 18:13:05.308024 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:05.307989 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94cc07fa-51f6-458a-868d-d9f708940d34" path="/var/lib/kubelet/pods/94cc07fa-51f6-458a-868d-d9f708940d34/volumes"
Apr 23 18:13:09.364598 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:09.364566 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"]
Apr 23 18:13:09.364994 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:09.364863 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="94cc07fa-51f6-458a-868d-d9f708940d34" containerName="switch-graph-be430"
Apr 23 18:13:09.364994 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:09.364875 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="94cc07fa-51f6-458a-868d-d9f708940d34" containerName="switch-graph-be430"
Apr 23 18:13:09.364994 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:09.364925 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="94cc07fa-51f6-458a-868d-d9f708940d34" containerName="switch-graph-be430"
Apr 23 18:13:09.369458 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:09.369437 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"
Apr 23 18:13:09.371924 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:09.371904 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-serving-cert\""
Apr 23 18:13:09.372011 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:09.371904 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-kube-rbac-proxy-sar-config\""
Apr 23 18:13:09.374772 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:09.374751 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"]
Apr 23 18:13:09.451238 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:09.451204 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90caca3c-a394-4fc1-b5c1-0622d17c4e62-proxy-tls\") pod \"model-chainer-8698b786b7-kbqbh\" (UID: \"90caca3c-a394-4fc1-b5c1-0622d17c4e62\") " pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"
Apr 23 18:13:09.451397 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:09.451253 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90caca3c-a394-4fc1-b5c1-0622d17c4e62-openshift-service-ca-bundle\") pod \"model-chainer-8698b786b7-kbqbh\" (UID: \"90caca3c-a394-4fc1-b5c1-0622d17c4e62\") " pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"
Apr 23 18:13:09.552622 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:09.552587 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90caca3c-a394-4fc1-b5c1-0622d17c4e62-proxy-tls\") pod \"model-chainer-8698b786b7-kbqbh\" (UID: \"90caca3c-a394-4fc1-b5c1-0622d17c4e62\") " pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"
Apr 23 18:13:09.552755 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:09.552633 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90caca3c-a394-4fc1-b5c1-0622d17c4e62-openshift-service-ca-bundle\") pod \"model-chainer-8698b786b7-kbqbh\" (UID: \"90caca3c-a394-4fc1-b5c1-0622d17c4e62\") " pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"
Apr 23 18:13:09.552755 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:13:09.552738 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/model-chainer-serving-cert: secret "model-chainer-serving-cert" not found
Apr 23 18:13:09.552839 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:13:09.552805 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90caca3c-a394-4fc1-b5c1-0622d17c4e62-proxy-tls podName:90caca3c-a394-4fc1-b5c1-0622d17c4e62 nodeName:}" failed. No retries permitted until 2026-04-23 18:13:10.052788622 +0000 UTC m=+897.351034732 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/90caca3c-a394-4fc1-b5c1-0622d17c4e62-proxy-tls") pod "model-chainer-8698b786b7-kbqbh" (UID: "90caca3c-a394-4fc1-b5c1-0622d17c4e62") : secret "model-chainer-serving-cert" not found
Apr 23 18:13:09.553273 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:09.553255 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90caca3c-a394-4fc1-b5c1-0622d17c4e62-openshift-service-ca-bundle\") pod \"model-chainer-8698b786b7-kbqbh\" (UID: \"90caca3c-a394-4fc1-b5c1-0622d17c4e62\") " pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"
Apr 23 18:13:10.057075 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:10.057040 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90caca3c-a394-4fc1-b5c1-0622d17c4e62-proxy-tls\") pod \"model-chainer-8698b786b7-kbqbh\" (UID: \"90caca3c-a394-4fc1-b5c1-0622d17c4e62\") " pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"
Apr 23 18:13:10.059666 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:10.059641 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90caca3c-a394-4fc1-b5c1-0622d17c4e62-proxy-tls\") pod \"model-chainer-8698b786b7-kbqbh\" (UID: \"90caca3c-a394-4fc1-b5c1-0622d17c4e62\") " pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"
Apr 23 18:13:10.280387 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:10.280357 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"
Apr 23 18:13:10.393622 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:10.393595 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"]
Apr 23 18:13:10.397935 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:13:10.397906 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90caca3c_a394_4fc1_b5c1_0622d17c4e62.slice/crio-bcfb64dc4bc82f3a957391a790a8489494764367fd0e1e9e7fd83f988eae40cb WatchSource:0}: Error finding container bcfb64dc4bc82f3a957391a790a8489494764367fd0e1e9e7fd83f988eae40cb: Status 404 returned error can't find the container with id bcfb64dc4bc82f3a957391a790a8489494764367fd0e1e9e7fd83f988eae40cb
Apr 23 18:13:10.399722 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:10.399703 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 18:13:10.890788 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:10.890753 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh" event={"ID":"90caca3c-a394-4fc1-b5c1-0622d17c4e62","Type":"ContainerStarted","Data":"7976ac0bb733bc5ca2bf037ae146fa62553f148f80c9c1236c03dafa5ddd8ef2"}
Apr 23 18:13:10.890952 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:10.890793 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh" event={"ID":"90caca3c-a394-4fc1-b5c1-0622d17c4e62","Type":"ContainerStarted","Data":"bcfb64dc4bc82f3a957391a790a8489494764367fd0e1e9e7fd83f988eae40cb"}
Apr 23 18:13:10.890952 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:10.890832 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"
Apr 23 18:13:10.908421 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:10.908382 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh" podStartSLOduration=1.908368375 podStartE2EDuration="1.908368375s" podCreationTimestamp="2026-04-23 18:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:13:10.907026837 +0000 UTC m=+898.205272971" watchObservedRunningTime="2026-04-23 18:13:10.908368375 +0000 UTC m=+898.206614508"
Apr 23 18:13:16.898796 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:16.898759 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"
Apr 23 18:13:19.461775 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:19.461735 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"]
Apr 23 18:13:19.462176 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:19.462000 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh" podUID="90caca3c-a394-4fc1-b5c1-0622d17c4e62" containerName="model-chainer" containerID="cri-o://7976ac0bb733bc5ca2bf037ae146fa62553f148f80c9c1236c03dafa5ddd8ef2" gracePeriod=30
Apr 23 18:13:19.586494 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:19.586407 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"]
Apr 23 18:13:19.586849 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:19.586797 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kserve-container" containerID="cri-o://c171b2b476f7a1602897cc3faa4f40ace632402bf8fdfd8989341a60370a7983" gracePeriod=30
Apr 23 18:13:19.586980 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:19.586841 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kube-rbac-proxy" containerID="cri-o://b5e6de4cabf5a3e50c268386ce0a2839f14d5120c77c5643ae8f10088457b192" gracePeriod=30
Apr 23 18:13:19.637755 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:19.637720 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"]
Apr 23 18:13:19.638074 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:19.638029 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kserve-container" containerID="cri-o://0a53882baed6bda5f565a4587acd839dd14776c57325028dccfd19f4fd239fd3" gracePeriod=30
Apr 23 18:13:19.638145 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:19.638081 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kube-rbac-proxy" containerID="cri-o://77c4fdc10ee22b1b0143c052dcc842ae74cfba5bdea3c5692ad0e1b5454f3c6c" gracePeriod=30
Apr 23 18:13:19.919709 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:19.919678 2572 generic.go:358] "Generic (PLEG): container finished" podID="04269895-41e9-4b40-a921-26abf7648545" containerID="b5e6de4cabf5a3e50c268386ce0a2839f14d5120c77c5643ae8f10088457b192" exitCode=2
Apr 23 18:13:19.919865 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:19.919751 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" event={"ID":"04269895-41e9-4b40-a921-26abf7648545","Type":"ContainerDied","Data":"b5e6de4cabf5a3e50c268386ce0a2839f14d5120c77c5643ae8f10088457b192"}
Apr 23 18:13:19.921529 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:19.921506 2572 generic.go:358] "Generic (PLEG): container finished" podID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerID="77c4fdc10ee22b1b0143c052dcc842ae74cfba5bdea3c5692ad0e1b5454f3c6c" exitCode=2
Apr 23 18:13:19.921639 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:19.921574 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" event={"ID":"f760979b-5b29-4f7b-99b5-00538b24bd1d","Type":"ContainerDied","Data":"77c4fdc10ee22b1b0143c052dcc842ae74cfba5bdea3c5692ad0e1b5454f3c6c"}
Apr 23 18:13:21.897449 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:21.897408 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh" podUID="90caca3c-a394-4fc1-b5c1-0622d17c4e62" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:13:22.601718 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:22.601670 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.134.0.21:8643/healthz\": dial tcp 10.134.0.21:8643: connect: connection refused"
Apr 23 18:13:22.601913 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:22.601670 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.134.0.20:8643/healthz\": dial tcp 10.134.0.20:8643: connect: connection refused"
Apr 23 18:13:22.605932 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:22.605905 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.20:8080: connect: connection refused"
Apr 23 18:13:22.606201 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:22.606182 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.21:8080: connect: connection refused"
Apr 23 18:13:23.175236 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.175212 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"
Apr 23 18:13:23.253678 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.253647 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6696\" (UniqueName: \"kubernetes.io/projected/f760979b-5b29-4f7b-99b5-00538b24bd1d-kube-api-access-k6696\") pod \"f760979b-5b29-4f7b-99b5-00538b24bd1d\" (UID: \"f760979b-5b29-4f7b-99b5-00538b24bd1d\") "
Apr 23 18:13:23.253832 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.253708 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f760979b-5b29-4f7b-99b5-00538b24bd1d-kserve-provision-location\") pod \"f760979b-5b29-4f7b-99b5-00538b24bd1d\" (UID: \"f760979b-5b29-4f7b-99b5-00538b24bd1d\") "
Apr 23 18:13:23.253832 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.253730 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f760979b-5b29-4f7b-99b5-00538b24bd1d-proxy-tls\") pod \"f760979b-5b29-4f7b-99b5-00538b24bd1d\" (UID: \"f760979b-5b29-4f7b-99b5-00538b24bd1d\") "
Apr 23 18:13:23.253832 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.253766 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"isvc-xgboost-graph-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/f760979b-5b29-4f7b-99b5-00538b24bd1d-isvc-xgboost-graph-kube-rbac-proxy-sar-config\") pod \"f760979b-5b29-4f7b-99b5-00538b24bd1d\" (UID: \"f760979b-5b29-4f7b-99b5-00538b24bd1d\") "
Apr 23 18:13:23.254012 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.253988 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f760979b-5b29-4f7b-99b5-00538b24bd1d-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "f760979b-5b29-4f7b-99b5-00538b24bd1d" (UID: "f760979b-5b29-4f7b-99b5-00538b24bd1d"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 23 18:13:23.254109 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.254089 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f760979b-5b29-4f7b-99b5-00538b24bd1d-isvc-xgboost-graph-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "isvc-xgboost-graph-kube-rbac-proxy-sar-config") pod "f760979b-5b29-4f7b-99b5-00538b24bd1d" (UID: "f760979b-5b29-4f7b-99b5-00538b24bd1d"). InnerVolumeSpecName "isvc-xgboost-graph-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:13:23.255720 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.255690 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f760979b-5b29-4f7b-99b5-00538b24bd1d-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "f760979b-5b29-4f7b-99b5-00538b24bd1d" (UID: "f760979b-5b29-4f7b-99b5-00538b24bd1d"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:13:23.255836 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.255735 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f760979b-5b29-4f7b-99b5-00538b24bd1d-kube-api-access-k6696" (OuterVolumeSpecName: "kube-api-access-k6696") pod "f760979b-5b29-4f7b-99b5-00538b24bd1d" (UID: "f760979b-5b29-4f7b-99b5-00538b24bd1d"). InnerVolumeSpecName "kube-api-access-k6696". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:13:23.354907 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.354877 2572 reconciler_common.go:299] "Volume detached for volume \"isvc-xgboost-graph-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/f760979b-5b29-4f7b-99b5-00538b24bd1d-isvc-xgboost-graph-kube-rbac-proxy-sar-config\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:13:23.354907 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.354906 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k6696\" (UniqueName: \"kubernetes.io/projected/f760979b-5b29-4f7b-99b5-00538b24bd1d-kube-api-access-k6696\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:13:23.354907 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.354916 2572 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/f760979b-5b29-4f7b-99b5-00538b24bd1d-kserve-provision-location\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:13:23.355112 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.354925 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f760979b-5b29-4f7b-99b5-00538b24bd1d-proxy-tls\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:13:23.820292 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.820267 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"
Apr 23 18:13:23.858521 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.858495 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hmml\" (UniqueName: \"kubernetes.io/projected/04269895-41e9-4b40-a921-26abf7648545-kube-api-access-9hmml\") pod \"04269895-41e9-4b40-a921-26abf7648545\" (UID: \"04269895-41e9-4b40-a921-26abf7648545\") "
Apr 23 18:13:23.858663 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.858533 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/04269895-41e9-4b40-a921-26abf7648545-isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\") pod \"04269895-41e9-4b40-a921-26abf7648545\" (UID: \"04269895-41e9-4b40-a921-26abf7648545\") "
Apr 23 18:13:23.858663 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.858578 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/04269895-41e9-4b40-a921-26abf7648545-proxy-tls\") pod \"04269895-41e9-4b40-a921-26abf7648545\" (UID: \"04269895-41e9-4b40-a921-26abf7648545\") "
Apr 23 18:13:23.858663 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.858613 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/04269895-41e9-4b40-a921-26abf7648545-kserve-provision-location\") pod \"04269895-41e9-4b40-a921-26abf7648545\" (UID: \"04269895-41e9-4b40-a921-26abf7648545\") "
Apr 23 18:13:23.858993 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.858967 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04269895-41e9-4b40-a921-26abf7648545-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "04269895-41e9-4b40-a921-26abf7648545" (UID: "04269895-41e9-4b40-a921-26abf7648545"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 23 18:13:23.858993 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.858965 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04269895-41e9-4b40-a921-26abf7648545-isvc-sklearn-graph-1-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "isvc-sklearn-graph-1-kube-rbac-proxy-sar-config") pod "04269895-41e9-4b40-a921-26abf7648545" (UID: "04269895-41e9-4b40-a921-26abf7648545"). InnerVolumeSpecName "isvc-sklearn-graph-1-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:13:23.860609 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.860585 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04269895-41e9-4b40-a921-26abf7648545-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "04269895-41e9-4b40-a921-26abf7648545" (UID: "04269895-41e9-4b40-a921-26abf7648545"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:13:23.860683 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.860619 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04269895-41e9-4b40-a921-26abf7648545-kube-api-access-9hmml" (OuterVolumeSpecName: "kube-api-access-9hmml") pod "04269895-41e9-4b40-a921-26abf7648545" (UID: "04269895-41e9-4b40-a921-26abf7648545"). InnerVolumeSpecName "kube-api-access-9hmml". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:13:23.937512 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.937434 2572 generic.go:358] "Generic (PLEG): container finished" podID="04269895-41e9-4b40-a921-26abf7648545" containerID="c171b2b476f7a1602897cc3faa4f40ace632402bf8fdfd8989341a60370a7983" exitCode=0
Apr 23 18:13:23.937655 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.937527 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" event={"ID":"04269895-41e9-4b40-a921-26abf7648545","Type":"ContainerDied","Data":"c171b2b476f7a1602897cc3faa4f40ace632402bf8fdfd8989341a60370a7983"}
Apr 23 18:13:23.937655 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.937550 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg" event={"ID":"04269895-41e9-4b40-a921-26abf7648545","Type":"ContainerDied","Data":"73065331ab3ec4e9db5978645e0fed5d15ff9870bd5de707680d9f61b2a24014"}
Apr 23 18:13:23.937655 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.937564 2572 scope.go:117] "RemoveContainer" containerID="b5e6de4cabf5a3e50c268386ce0a2839f14d5120c77c5643ae8f10088457b192"
Apr 23 18:13:23.937655 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.937581 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"
Apr 23 18:13:23.939191 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.939170 2572 generic.go:358] "Generic (PLEG): container finished" podID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerID="0a53882baed6bda5f565a4587acd839dd14776c57325028dccfd19f4fd239fd3" exitCode=0
Apr 23 18:13:23.939312 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.939206 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" event={"ID":"f760979b-5b29-4f7b-99b5-00538b24bd1d","Type":"ContainerDied","Data":"0a53882baed6bda5f565a4587acd839dd14776c57325028dccfd19f4fd239fd3"}
Apr 23 18:13:23.939312 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.939227 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m" event={"ID":"f760979b-5b29-4f7b-99b5-00538b24bd1d","Type":"ContainerDied","Data":"3e6020979897c442fbdc29ba6552c9a3424c1e8b7b8abbc0eab6ba7a6d1a5d90"}
Apr 23 18:13:23.939312 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.939247 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"
Apr 23 18:13:23.951852 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.951834 2572 scope.go:117] "RemoveContainer" containerID="c171b2b476f7a1602897cc3faa4f40ace632402bf8fdfd8989341a60370a7983"
Apr 23 18:13:23.958733 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.958714 2572 scope.go:117] "RemoveContainer" containerID="4c3c668dc1bed2559d1b3e6eba5d5735dd1cecb4e5a404fe6168bc0ce83b1fce"
Apr 23 18:13:23.959008 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.958991 2572 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/04269895-41e9-4b40-a921-26abf7648545-kserve-provision-location\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:13:23.959080 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.959014 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9hmml\" (UniqueName: \"kubernetes.io/projected/04269895-41e9-4b40-a921-26abf7648545-kube-api-access-9hmml\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:13:23.959080 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.959033 2572 reconciler_common.go:299] "Volume detached for volume \"isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/04269895-41e9-4b40-a921-26abf7648545-isvc-sklearn-graph-1-kube-rbac-proxy-sar-config\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:13:23.959080 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.959049 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/04269895-41e9-4b40-a921-26abf7648545-proxy-tls\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:13:23.966052 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.966033 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"]
Apr 23 18:13:23.966100 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.966063 2572 scope.go:117] "RemoveContainer" containerID="b5e6de4cabf5a3e50c268386ce0a2839f14d5120c77c5643ae8f10088457b192"
Apr 23 18:13:23.966312 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:13:23.966295 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5e6de4cabf5a3e50c268386ce0a2839f14d5120c77c5643ae8f10088457b192\": container with ID starting with b5e6de4cabf5a3e50c268386ce0a2839f14d5120c77c5643ae8f10088457b192 not found: ID does not exist" containerID="b5e6de4cabf5a3e50c268386ce0a2839f14d5120c77c5643ae8f10088457b192"
Apr 23 18:13:23.966373 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.966339 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5e6de4cabf5a3e50c268386ce0a2839f14d5120c77c5643ae8f10088457b192"} err="failed to get container status \"b5e6de4cabf5a3e50c268386ce0a2839f14d5120c77c5643ae8f10088457b192\": rpc error: code = NotFound desc = could not find container \"b5e6de4cabf5a3e50c268386ce0a2839f14d5120c77c5643ae8f10088457b192\": container with ID starting with b5e6de4cabf5a3e50c268386ce0a2839f14d5120c77c5643ae8f10088457b192 not found: ID does not exist"
Apr 23 18:13:23.966373 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.966357 2572 scope.go:117] "RemoveContainer" containerID="c171b2b476f7a1602897cc3faa4f40ace632402bf8fdfd8989341a60370a7983"
Apr 23 18:13:23.966613 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:13:23.966597 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c171b2b476f7a1602897cc3faa4f40ace632402bf8fdfd8989341a60370a7983\": container with ID starting with c171b2b476f7a1602897cc3faa4f40ace632402bf8fdfd8989341a60370a7983 not found: ID does not exist" containerID="c171b2b476f7a1602897cc3faa4f40ace632402bf8fdfd8989341a60370a7983"
Apr 23 18:13:23.966648 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.966618 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c171b2b476f7a1602897cc3faa4f40ace632402bf8fdfd8989341a60370a7983"} err="failed to get container status \"c171b2b476f7a1602897cc3faa4f40ace632402bf8fdfd8989341a60370a7983\": rpc error: code = NotFound desc = could not find container \"c171b2b476f7a1602897cc3faa4f40ace632402bf8fdfd8989341a60370a7983\": container with ID starting with c171b2b476f7a1602897cc3faa4f40ace632402bf8fdfd8989341a60370a7983 not found: ID does not exist"
Apr 23 18:13:23.966648 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.966634 2572 scope.go:117] "RemoveContainer" containerID="4c3c668dc1bed2559d1b3e6eba5d5735dd1cecb4e5a404fe6168bc0ce83b1fce"
Apr 23 18:13:23.966828 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:13:23.966813 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c3c668dc1bed2559d1b3e6eba5d5735dd1cecb4e5a404fe6168bc0ce83b1fce\": container with ID starting with 4c3c668dc1bed2559d1b3e6eba5d5735dd1cecb4e5a404fe6168bc0ce83b1fce not found: ID does not exist" containerID="4c3c668dc1bed2559d1b3e6eba5d5735dd1cecb4e5a404fe6168bc0ce83b1fce"
Apr 23 18:13:23.966860 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.966832 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c3c668dc1bed2559d1b3e6eba5d5735dd1cecb4e5a404fe6168bc0ce83b1fce"} err="failed to get container status \"4c3c668dc1bed2559d1b3e6eba5d5735dd1cecb4e5a404fe6168bc0ce83b1fce\": rpc error: code = NotFound desc = could not find container \"4c3c668dc1bed2559d1b3e6eba5d5735dd1cecb4e5a404fe6168bc0ce83b1fce\": container with ID starting with 4c3c668dc1bed2559d1b3e6eba5d5735dd1cecb4e5a404fe6168bc0ce83b1fce not found: ID does not exist"
Apr 23 18:13:23.966860 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.966846 2572 scope.go:117] "RemoveContainer" containerID="77c4fdc10ee22b1b0143c052dcc842ae74cfba5bdea3c5692ad0e1b5454f3c6c"
Apr 23 18:13:23.971146 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.971123 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-htg4m"]
Apr 23 18:13:23.973867 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.973852 2572 scope.go:117] "RemoveContainer" containerID="0a53882baed6bda5f565a4587acd839dd14776c57325028dccfd19f4fd239fd3"
Apr 23 18:13:23.980047 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.980032 2572 scope.go:117] "RemoveContainer" containerID="bf1d7d49ce3dc2cede3cdaf5aeb0d4c4d57f049f70b26a01921eb28fd8025b42"
Apr 23 18:13:23.983132 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.983113 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"]
Apr 23 18:13:23.987374 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.987350 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-1-predictor-588b77b98-h9jfg"]
Apr 23 18:13:23.987810 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.987793 2572 scope.go:117] "RemoveContainer" containerID="77c4fdc10ee22b1b0143c052dcc842ae74cfba5bdea3c5692ad0e1b5454f3c6c"
Apr 23 18:13:23.988072 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:13:23.988055 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77c4fdc10ee22b1b0143c052dcc842ae74cfba5bdea3c5692ad0e1b5454f3c6c\": container with ID starting with 77c4fdc10ee22b1b0143c052dcc842ae74cfba5bdea3c5692ad0e1b5454f3c6c not found: ID does not exist" containerID="77c4fdc10ee22b1b0143c052dcc842ae74cfba5bdea3c5692ad0e1b5454f3c6c"
Apr 23 18:13:23.988113 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.988080 2572 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77c4fdc10ee22b1b0143c052dcc842ae74cfba5bdea3c5692ad0e1b5454f3c6c"} err="failed to get container status \"77c4fdc10ee22b1b0143c052dcc842ae74cfba5bdea3c5692ad0e1b5454f3c6c\": rpc error: code = NotFound desc = could not find container \"77c4fdc10ee22b1b0143c052dcc842ae74cfba5bdea3c5692ad0e1b5454f3c6c\": container with ID starting with 77c4fdc10ee22b1b0143c052dcc842ae74cfba5bdea3c5692ad0e1b5454f3c6c not found: ID does not exist" Apr 23 18:13:23.988113 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.988098 2572 scope.go:117] "RemoveContainer" containerID="0a53882baed6bda5f565a4587acd839dd14776c57325028dccfd19f4fd239fd3" Apr 23 18:13:23.988315 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:13:23.988300 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a53882baed6bda5f565a4587acd839dd14776c57325028dccfd19f4fd239fd3\": container with ID starting with 0a53882baed6bda5f565a4587acd839dd14776c57325028dccfd19f4fd239fd3 not found: ID does not exist" containerID="0a53882baed6bda5f565a4587acd839dd14776c57325028dccfd19f4fd239fd3" Apr 23 18:13:23.988379 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.988334 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a53882baed6bda5f565a4587acd839dd14776c57325028dccfd19f4fd239fd3"} err="failed to get container status \"0a53882baed6bda5f565a4587acd839dd14776c57325028dccfd19f4fd239fd3\": rpc error: code = NotFound desc = could not find container \"0a53882baed6bda5f565a4587acd839dd14776c57325028dccfd19f4fd239fd3\": container with ID starting with 0a53882baed6bda5f565a4587acd839dd14776c57325028dccfd19f4fd239fd3 not found: ID does not exist" Apr 23 18:13:23.988379 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.988355 2572 scope.go:117] "RemoveContainer" 
containerID="bf1d7d49ce3dc2cede3cdaf5aeb0d4c4d57f049f70b26a01921eb28fd8025b42" Apr 23 18:13:23.988583 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:13:23.988567 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf1d7d49ce3dc2cede3cdaf5aeb0d4c4d57f049f70b26a01921eb28fd8025b42\": container with ID starting with bf1d7d49ce3dc2cede3cdaf5aeb0d4c4d57f049f70b26a01921eb28fd8025b42 not found: ID does not exist" containerID="bf1d7d49ce3dc2cede3cdaf5aeb0d4c4d57f049f70b26a01921eb28fd8025b42" Apr 23 18:13:23.988631 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:23.988588 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf1d7d49ce3dc2cede3cdaf5aeb0d4c4d57f049f70b26a01921eb28fd8025b42"} err="failed to get container status \"bf1d7d49ce3dc2cede3cdaf5aeb0d4c4d57f049f70b26a01921eb28fd8025b42\": rpc error: code = NotFound desc = could not find container \"bf1d7d49ce3dc2cede3cdaf5aeb0d4c4d57f049f70b26a01921eb28fd8025b42\": container with ID starting with bf1d7d49ce3dc2cede3cdaf5aeb0d4c4d57f049f70b26a01921eb28fd8025b42 not found: ID does not exist" Apr 23 18:13:25.308341 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:25.308281 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04269895-41e9-4b40-a921-26abf7648545" path="/var/lib/kubelet/pods/04269895-41e9-4b40-a921-26abf7648545/volumes" Apr 23 18:13:25.308994 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:25.308972 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" path="/var/lib/kubelet/pods/f760979b-5b29-4f7b-99b5-00538b24bd1d/volumes" Apr 23 18:13:26.897702 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:26.897661 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh" podUID="90caca3c-a394-4fc1-b5c1-0622d17c4e62" containerName="model-chainer" 
probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:13:31.897919 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:31.897880 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh" podUID="90caca3c-a394-4fc1-b5c1-0622d17c4e62" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:13:31.898354 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:31.898033 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh" Apr 23 18:13:36.897539 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:36.897500 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh" podUID="90caca3c-a394-4fc1-b5c1-0622d17c4e62" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:13:41.897954 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:41.897910 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh" podUID="90caca3c-a394-4fc1-b5c1-0622d17c4e62" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:13:43.803090 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.803032 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt"] Apr 23 18:13:43.803462 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.803347 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kube-rbac-proxy" Apr 23 18:13:43.803462 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.803359 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kube-rbac-proxy" Apr 23 18:13:43.803462 ip-10-0-130-162 
kubenswrapper[2572]: I0423 18:13:43.803372 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kube-rbac-proxy" Apr 23 18:13:43.803462 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.803377 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kube-rbac-proxy" Apr 23 18:13:43.803462 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.803385 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kserve-container" Apr 23 18:13:43.803462 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.803391 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kserve-container" Apr 23 18:13:43.803462 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.803403 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="storage-initializer" Apr 23 18:13:43.803462 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.803408 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="storage-initializer" Apr 23 18:13:43.803462 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.803414 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="storage-initializer" Apr 23 18:13:43.803462 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.803419 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="storage-initializer" Apr 23 18:13:43.803462 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.803427 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="04269895-41e9-4b40-a921-26abf7648545" 
containerName="kserve-container" Apr 23 18:13:43.803462 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.803432 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kserve-container" Apr 23 18:13:43.803803 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.803472 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kserve-container" Apr 23 18:13:43.803803 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.803480 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="f760979b-5b29-4f7b-99b5-00538b24bd1d" containerName="kube-rbac-proxy" Apr 23 18:13:43.803803 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.803488 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kserve-container" Apr 23 18:13:43.803803 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.803494 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="04269895-41e9-4b40-a921-26abf7648545" containerName="kube-rbac-proxy" Apr 23 18:13:43.806357 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.806337 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" Apr 23 18:13:43.809429 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.809408 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-03167-kube-rbac-proxy-sar-config\"" Apr 23 18:13:43.809563 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.809456 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-03167-serving-cert\"" Apr 23 18:13:43.814932 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.814898 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt"] Apr 23 18:13:43.903310 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.903279 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06345e49-9db5-44f6-bf83-cbf0714e3aca-proxy-tls\") pod \"switch-graph-03167-6b54457b8f-b6sgt\" (UID: \"06345e49-9db5-44f6-bf83-cbf0714e3aca\") " pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" Apr 23 18:13:43.903484 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:43.903419 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06345e49-9db5-44f6-bf83-cbf0714e3aca-openshift-service-ca-bundle\") pod \"switch-graph-03167-6b54457b8f-b6sgt\" (UID: \"06345e49-9db5-44f6-bf83-cbf0714e3aca\") " pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" Apr 23 18:13:44.004777 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:44.004745 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06345e49-9db5-44f6-bf83-cbf0714e3aca-openshift-service-ca-bundle\") pod 
\"switch-graph-03167-6b54457b8f-b6sgt\" (UID: \"06345e49-9db5-44f6-bf83-cbf0714e3aca\") " pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" Apr 23 18:13:44.004777 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:44.004780 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06345e49-9db5-44f6-bf83-cbf0714e3aca-proxy-tls\") pod \"switch-graph-03167-6b54457b8f-b6sgt\" (UID: \"06345e49-9db5-44f6-bf83-cbf0714e3aca\") " pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" Apr 23 18:13:44.004935 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:13:44.004896 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/switch-graph-03167-serving-cert: secret "switch-graph-03167-serving-cert" not found Apr 23 18:13:44.004980 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:13:44.004959 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06345e49-9db5-44f6-bf83-cbf0714e3aca-proxy-tls podName:06345e49-9db5-44f6-bf83-cbf0714e3aca nodeName:}" failed. No retries permitted until 2026-04-23 18:13:44.504943021 +0000 UTC m=+931.803189135 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/06345e49-9db5-44f6-bf83-cbf0714e3aca-proxy-tls") pod "switch-graph-03167-6b54457b8f-b6sgt" (UID: "06345e49-9db5-44f6-bf83-cbf0714e3aca") : secret "switch-graph-03167-serving-cert" not found Apr 23 18:13:44.005412 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:44.005392 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06345e49-9db5-44f6-bf83-cbf0714e3aca-openshift-service-ca-bundle\") pod \"switch-graph-03167-6b54457b8f-b6sgt\" (UID: \"06345e49-9db5-44f6-bf83-cbf0714e3aca\") " pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" Apr 23 18:13:44.509810 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:44.509775 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06345e49-9db5-44f6-bf83-cbf0714e3aca-proxy-tls\") pod \"switch-graph-03167-6b54457b8f-b6sgt\" (UID: \"06345e49-9db5-44f6-bf83-cbf0714e3aca\") " pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" Apr 23 18:13:44.512177 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:44.512144 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06345e49-9db5-44f6-bf83-cbf0714e3aca-proxy-tls\") pod \"switch-graph-03167-6b54457b8f-b6sgt\" (UID: \"06345e49-9db5-44f6-bf83-cbf0714e3aca\") " pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" Apr 23 18:13:44.716882 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:44.716847 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" Apr 23 18:13:44.833675 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:44.833651 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt"] Apr 23 18:13:44.835686 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:13:44.835658 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06345e49_9db5_44f6_bf83_cbf0714e3aca.slice/crio-adbae1c8db190946f9cf53f1fb999f4f7cade18729271be454e1afac7db75874 WatchSource:0}: Error finding container adbae1c8db190946f9cf53f1fb999f4f7cade18729271be454e1afac7db75874: Status 404 returned error can't find the container with id adbae1c8db190946f9cf53f1fb999f4f7cade18729271be454e1afac7db75874 Apr 23 18:13:45.006537 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:45.006497 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" event={"ID":"06345e49-9db5-44f6-bf83-cbf0714e3aca","Type":"ContainerStarted","Data":"c16184ab289104960d037cd6c6dbe72034058f24e2f7fb740a1c52c100dc97be"} Apr 23 18:13:45.006537 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:45.006535 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" event={"ID":"06345e49-9db5-44f6-bf83-cbf0714e3aca","Type":"ContainerStarted","Data":"adbae1c8db190946f9cf53f1fb999f4f7cade18729271be454e1afac7db75874"} Apr 23 18:13:45.006727 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:45.006558 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" Apr 23 18:13:45.025741 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:45.025643 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" 
podStartSLOduration=2.025604741 podStartE2EDuration="2.025604741s" podCreationTimestamp="2026-04-23 18:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:13:45.023990397 +0000 UTC m=+932.322236530" watchObservedRunningTime="2026-04-23 18:13:45.025604741 +0000 UTC m=+932.323850871" Apr 23 18:13:46.897093 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:46.897052 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh" podUID="90caca3c-a394-4fc1-b5c1-0622d17c4e62" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:13:49.609055 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:49.609032 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh" Apr 23 18:13:49.747038 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:49.747005 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90caca3c-a394-4fc1-b5c1-0622d17c4e62-proxy-tls\") pod \"90caca3c-a394-4fc1-b5c1-0622d17c4e62\" (UID: \"90caca3c-a394-4fc1-b5c1-0622d17c4e62\") " Apr 23 18:13:49.747190 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:49.747103 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90caca3c-a394-4fc1-b5c1-0622d17c4e62-openshift-service-ca-bundle\") pod \"90caca3c-a394-4fc1-b5c1-0622d17c4e62\" (UID: \"90caca3c-a394-4fc1-b5c1-0622d17c4e62\") " Apr 23 18:13:49.747448 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:49.747427 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90caca3c-a394-4fc1-b5c1-0622d17c4e62-openshift-service-ca-bundle" (OuterVolumeSpecName: 
"openshift-service-ca-bundle") pod "90caca3c-a394-4fc1-b5c1-0622d17c4e62" (UID: "90caca3c-a394-4fc1-b5c1-0622d17c4e62"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:13:49.749052 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:49.749030 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90caca3c-a394-4fc1-b5c1-0622d17c4e62-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "90caca3c-a394-4fc1-b5c1-0622d17c4e62" (UID: "90caca3c-a394-4fc1-b5c1-0622d17c4e62"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:13:49.847822 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:49.847791 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90caca3c-a394-4fc1-b5c1-0622d17c4e62-proxy-tls\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\"" Apr 23 18:13:49.847822 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:49.847821 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90caca3c-a394-4fc1-b5c1-0622d17c4e62-openshift-service-ca-bundle\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\"" Apr 23 18:13:50.023834 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:50.023749 2572 generic.go:358] "Generic (PLEG): container finished" podID="90caca3c-a394-4fc1-b5c1-0622d17c4e62" containerID="7976ac0bb733bc5ca2bf037ae146fa62553f148f80c9c1236c03dafa5ddd8ef2" exitCode=0 Apr 23 18:13:50.023834 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:50.023800 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh" event={"ID":"90caca3c-a394-4fc1-b5c1-0622d17c4e62","Type":"ContainerDied","Data":"7976ac0bb733bc5ca2bf037ae146fa62553f148f80c9c1236c03dafa5ddd8ef2"} Apr 23 18:13:50.023834 ip-10-0-130-162 kubenswrapper[2572]: I0423 
18:13:50.023818 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh" Apr 23 18:13:50.023834 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:50.023831 2572 scope.go:117] "RemoveContainer" containerID="7976ac0bb733bc5ca2bf037ae146fa62553f148f80c9c1236c03dafa5ddd8ef2" Apr 23 18:13:50.024132 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:50.023821 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh" event={"ID":"90caca3c-a394-4fc1-b5c1-0622d17c4e62","Type":"ContainerDied","Data":"bcfb64dc4bc82f3a957391a790a8489494764367fd0e1e9e7fd83f988eae40cb"} Apr 23 18:13:50.031789 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:50.031766 2572 scope.go:117] "RemoveContainer" containerID="7976ac0bb733bc5ca2bf037ae146fa62553f148f80c9c1236c03dafa5ddd8ef2" Apr 23 18:13:50.032056 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:13:50.032030 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7976ac0bb733bc5ca2bf037ae146fa62553f148f80c9c1236c03dafa5ddd8ef2\": container with ID starting with 7976ac0bb733bc5ca2bf037ae146fa62553f148f80c9c1236c03dafa5ddd8ef2 not found: ID does not exist" containerID="7976ac0bb733bc5ca2bf037ae146fa62553f148f80c9c1236c03dafa5ddd8ef2" Apr 23 18:13:50.032128 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:50.032069 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7976ac0bb733bc5ca2bf037ae146fa62553f148f80c9c1236c03dafa5ddd8ef2"} err="failed to get container status \"7976ac0bb733bc5ca2bf037ae146fa62553f148f80c9c1236c03dafa5ddd8ef2\": rpc error: code = NotFound desc = could not find container \"7976ac0bb733bc5ca2bf037ae146fa62553f148f80c9c1236c03dafa5ddd8ef2\": container with ID starting with 7976ac0bb733bc5ca2bf037ae146fa62553f148f80c9c1236c03dafa5ddd8ef2 not found: ID does not exist" Apr 23 
18:13:50.047738 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:50.047708 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"] Apr 23 18:13:50.051194 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:50.051170 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/model-chainer-8698b786b7-kbqbh"] Apr 23 18:13:51.019766 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:51.019738 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" Apr 23 18:13:51.309157 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:13:51.309062 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90caca3c-a394-4fc1-b5c1-0622d17c4e62" path="/var/lib/kubelet/pods/90caca3c-a394-4fc1-b5c1-0622d17c4e62/volumes" Apr 23 18:14:29.717653 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:29.717581 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"] Apr 23 18:14:29.718079 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:29.717859 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="90caca3c-a394-4fc1-b5c1-0622d17c4e62" containerName="model-chainer" Apr 23 18:14:29.718079 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:29.717870 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="90caca3c-a394-4fc1-b5c1-0622d17c4e62" containerName="model-chainer" Apr 23 18:14:29.718079 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:29.717917 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="90caca3c-a394-4fc1-b5c1-0622d17c4e62" containerName="model-chainer" Apr 23 18:14:29.720641 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:29.720619 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8" Apr 23 18:14:29.723188 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:29.723169 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-89c6b-serving-cert\"" Apr 23 18:14:29.723552 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:29.723535 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-89c6b-kube-rbac-proxy-sar-config\"" Apr 23 18:14:29.740585 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:29.740560 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1a7bd89-64d1-419a-9886-182284561c3a-openshift-service-ca-bundle\") pod \"sequence-graph-89c6b-5dcc6f7895-k8hq8\" (UID: \"c1a7bd89-64d1-419a-9886-182284561c3a\") " pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8" Apr 23 18:14:29.740666 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:29.740593 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1a7bd89-64d1-419a-9886-182284561c3a-proxy-tls\") pod \"sequence-graph-89c6b-5dcc6f7895-k8hq8\" (UID: \"c1a7bd89-64d1-419a-9886-182284561c3a\") " pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8" Apr 23 18:14:29.753121 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:29.753098 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"] Apr 23 18:14:29.841037 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:29.841000 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1a7bd89-64d1-419a-9886-182284561c3a-openshift-service-ca-bundle\") pod 
\"sequence-graph-89c6b-5dcc6f7895-k8hq8\" (UID: \"c1a7bd89-64d1-419a-9886-182284561c3a\") " pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"
Apr 23 18:14:29.841185 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:29.841050 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1a7bd89-64d1-419a-9886-182284561c3a-proxy-tls\") pod \"sequence-graph-89c6b-5dcc6f7895-k8hq8\" (UID: \"c1a7bd89-64d1-419a-9886-182284561c3a\") " pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"
Apr 23 18:14:29.841185 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:14:29.841172 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/sequence-graph-89c6b-serving-cert: secret "sequence-graph-89c6b-serving-cert" not found
Apr 23 18:14:29.841259 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:14:29.841242 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1a7bd89-64d1-419a-9886-182284561c3a-proxy-tls podName:c1a7bd89-64d1-419a-9886-182284561c3a nodeName:}" failed. No retries permitted until 2026-04-23 18:14:30.341225546 +0000 UTC m=+977.639471657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c1a7bd89-64d1-419a-9886-182284561c3a-proxy-tls") pod "sequence-graph-89c6b-5dcc6f7895-k8hq8" (UID: "c1a7bd89-64d1-419a-9886-182284561c3a") : secret "sequence-graph-89c6b-serving-cert" not found
Apr 23 18:14:29.841625 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:29.841604 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1a7bd89-64d1-419a-9886-182284561c3a-openshift-service-ca-bundle\") pod \"sequence-graph-89c6b-5dcc6f7895-k8hq8\" (UID: \"c1a7bd89-64d1-419a-9886-182284561c3a\") " pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"
Apr 23 18:14:30.345381 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:30.345316 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1a7bd89-64d1-419a-9886-182284561c3a-proxy-tls\") pod \"sequence-graph-89c6b-5dcc6f7895-k8hq8\" (UID: \"c1a7bd89-64d1-419a-9886-182284561c3a\") " pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"
Apr 23 18:14:30.347790 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:30.347755 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1a7bd89-64d1-419a-9886-182284561c3a-proxy-tls\") pod \"sequence-graph-89c6b-5dcc6f7895-k8hq8\" (UID: \"c1a7bd89-64d1-419a-9886-182284561c3a\") " pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"
Apr 23 18:14:30.631027 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:30.630943 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"
Apr 23 18:14:30.746916 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:30.746891 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"]
Apr 23 18:14:30.749251 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:14:30.749222 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1a7bd89_64d1_419a_9886_182284561c3a.slice/crio-1c15b3642ca20d5de5b066accea37d1fd8a9cf1c5167802f12412de2648a3819 WatchSource:0}: Error finding container 1c15b3642ca20d5de5b066accea37d1fd8a9cf1c5167802f12412de2648a3819: Status 404 returned error can't find the container with id 1c15b3642ca20d5de5b066accea37d1fd8a9cf1c5167802f12412de2648a3819
Apr 23 18:14:31.141810 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:31.141769 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8" event={"ID":"c1a7bd89-64d1-419a-9886-182284561c3a","Type":"ContainerStarted","Data":"e46f9c5e4aa61585b8013ff50dc9aac0ea14656021715d3dd0dd0a59d9751d04"}
Apr 23 18:14:31.141810 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:31.141813 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8" event={"ID":"c1a7bd89-64d1-419a-9886-182284561c3a","Type":"ContainerStarted","Data":"1c15b3642ca20d5de5b066accea37d1fd8a9cf1c5167802f12412de2648a3819"}
Apr 23 18:14:31.142016 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:31.141856 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"
Apr 23 18:14:31.160573 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:31.160529 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8" podStartSLOduration=2.160514807 podStartE2EDuration="2.160514807s" podCreationTimestamp="2026-04-23 18:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:14:31.159864628 +0000 UTC m=+978.458110760" watchObservedRunningTime="2026-04-23 18:14:31.160514807 +0000 UTC m=+978.458760940"
Apr 23 18:14:37.150652 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:14:37.150620 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"
Apr 23 18:21:58.500946 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:21:58.500872 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt"]
Apr 23 18:21:58.503311 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:21:58.501153 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" podUID="06345e49-9db5-44f6-bf83-cbf0714e3aca" containerName="switch-graph-03167" containerID="cri-o://c16184ab289104960d037cd6c6dbe72034058f24e2f7fb740a1c52c100dc97be" gracePeriod=30
Apr 23 18:22:01.018266 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:01.018219 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" podUID="06345e49-9db5-44f6-bf83-cbf0714e3aca" containerName="switch-graph-03167" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:22:06.018616 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:06.018576 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" podUID="06345e49-9db5-44f6-bf83-cbf0714e3aca" containerName="switch-graph-03167" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:22:11.018922 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:11.018885 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" podUID="06345e49-9db5-44f6-bf83-cbf0714e3aca" containerName="switch-graph-03167" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:22:11.019304 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:11.018985 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt"
Apr 23 18:22:16.018293 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:16.018251 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" podUID="06345e49-9db5-44f6-bf83-cbf0714e3aca" containerName="switch-graph-03167" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:22:21.018262 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:21.018208 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" podUID="06345e49-9db5-44f6-bf83-cbf0714e3aca" containerName="switch-graph-03167" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:22:26.018072 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:26.018029 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" podUID="06345e49-9db5-44f6-bf83-cbf0714e3aca" containerName="switch-graph-03167" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:22:28.642862 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:28.642840 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt"
Apr 23 18:22:28.695505 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:28.695475 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06345e49-9db5-44f6-bf83-cbf0714e3aca-openshift-service-ca-bundle\") pod \"06345e49-9db5-44f6-bf83-cbf0714e3aca\" (UID: \"06345e49-9db5-44f6-bf83-cbf0714e3aca\") "
Apr 23 18:22:28.695629 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:28.695548 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06345e49-9db5-44f6-bf83-cbf0714e3aca-proxy-tls\") pod \"06345e49-9db5-44f6-bf83-cbf0714e3aca\" (UID: \"06345e49-9db5-44f6-bf83-cbf0714e3aca\") "
Apr 23 18:22:28.695841 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:28.695816 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06345e49-9db5-44f6-bf83-cbf0714e3aca-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "06345e49-9db5-44f6-bf83-cbf0714e3aca" (UID: "06345e49-9db5-44f6-bf83-cbf0714e3aca"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:22:28.697461 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:28.697438 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06345e49-9db5-44f6-bf83-cbf0714e3aca-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "06345e49-9db5-44f6-bf83-cbf0714e3aca" (UID: "06345e49-9db5-44f6-bf83-cbf0714e3aca"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:22:28.796636 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:28.796564 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06345e49-9db5-44f6-bf83-cbf0714e3aca-openshift-service-ca-bundle\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:22:28.796636 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:28.796591 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06345e49-9db5-44f6-bf83-cbf0714e3aca-proxy-tls\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:22:29.464192 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:29.464158 2572 generic.go:358] "Generic (PLEG): container finished" podID="06345e49-9db5-44f6-bf83-cbf0714e3aca" containerID="c16184ab289104960d037cd6c6dbe72034058f24e2f7fb740a1c52c100dc97be" exitCode=0
Apr 23 18:22:29.464443 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:29.464209 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt"
Apr 23 18:22:29.464443 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:29.464247 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" event={"ID":"06345e49-9db5-44f6-bf83-cbf0714e3aca","Type":"ContainerDied","Data":"c16184ab289104960d037cd6c6dbe72034058f24e2f7fb740a1c52c100dc97be"}
Apr 23 18:22:29.464443 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:29.464289 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt" event={"ID":"06345e49-9db5-44f6-bf83-cbf0714e3aca","Type":"ContainerDied","Data":"adbae1c8db190946f9cf53f1fb999f4f7cade18729271be454e1afac7db75874"}
Apr 23 18:22:29.464443 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:29.464311 2572 scope.go:117] "RemoveContainer" containerID="c16184ab289104960d037cd6c6dbe72034058f24e2f7fb740a1c52c100dc97be"
Apr 23 18:22:29.471663 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:29.471641 2572 scope.go:117] "RemoveContainer" containerID="c16184ab289104960d037cd6c6dbe72034058f24e2f7fb740a1c52c100dc97be"
Apr 23 18:22:29.471894 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:22:29.471877 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c16184ab289104960d037cd6c6dbe72034058f24e2f7fb740a1c52c100dc97be\": container with ID starting with c16184ab289104960d037cd6c6dbe72034058f24e2f7fb740a1c52c100dc97be not found: ID does not exist" containerID="c16184ab289104960d037cd6c6dbe72034058f24e2f7fb740a1c52c100dc97be"
Apr 23 18:22:29.471937 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:29.471902 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c16184ab289104960d037cd6c6dbe72034058f24e2f7fb740a1c52c100dc97be"} err="failed to get container status \"c16184ab289104960d037cd6c6dbe72034058f24e2f7fb740a1c52c100dc97be\": rpc error: code = NotFound desc = could not find container \"c16184ab289104960d037cd6c6dbe72034058f24e2f7fb740a1c52c100dc97be\": container with ID starting with c16184ab289104960d037cd6c6dbe72034058f24e2f7fb740a1c52c100dc97be not found: ID does not exist"
Apr 23 18:22:29.484525 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:29.484505 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt"]
Apr 23 18:22:29.490271 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:29.490249 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-03167-6b54457b8f-b6sgt"]
Apr 23 18:22:31.307519 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:31.307486 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06345e49-9db5-44f6-bf83-cbf0714e3aca" path="/var/lib/kubelet/pods/06345e49-9db5-44f6-bf83-cbf0714e3aca/volumes"
Apr 23 18:22:44.397966 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:44.397933 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"]
Apr 23 18:22:44.398315 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:44.398183 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8" podUID="c1a7bd89-64d1-419a-9886-182284561c3a" containerName="sequence-graph-89c6b" containerID="cri-o://e46f9c5e4aa61585b8013ff50dc9aac0ea14656021715d3dd0dd0a59d9751d04" gracePeriod=30
Apr 23 18:22:47.148589 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:47.148546 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8" podUID="c1a7bd89-64d1-419a-9886-182284561c3a" containerName="sequence-graph-89c6b" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:22:52.148599 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:52.148554 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8" podUID="c1a7bd89-64d1-419a-9886-182284561c3a" containerName="sequence-graph-89c6b" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:22:57.148529 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:57.148482 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8" podUID="c1a7bd89-64d1-419a-9886-182284561c3a" containerName="sequence-graph-89c6b" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:22:57.148899 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:57.148600 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"
Apr 23 18:22:58.731479 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:58.729428 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"]
Apr 23 18:22:58.731479 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:58.730059 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="06345e49-9db5-44f6-bf83-cbf0714e3aca" containerName="switch-graph-03167"
Apr 23 18:22:58.731479 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:58.730076 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="06345e49-9db5-44f6-bf83-cbf0714e3aca" containerName="switch-graph-03167"
Apr 23 18:22:58.731479 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:58.730183 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="06345e49-9db5-44f6-bf83-cbf0714e3aca" containerName="switch-graph-03167"
Apr 23 18:22:58.733720 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:58.733686 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"
Apr 23 18:22:58.736419 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:58.736383 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-5d865-kube-rbac-proxy-sar-config\""
Apr 23 18:22:58.736552 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:58.736426 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-5d865-serving-cert\""
Apr 23 18:22:58.738200 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:58.738173 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"]
Apr 23 18:22:58.806562 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:58.806538 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a97bb6ee-8a64-4279-bb30-70f083352523-openshift-service-ca-bundle\") pod \"ensemble-graph-5d865-8ddbb586f-r2pd9\" (UID: \"a97bb6ee-8a64-4279-bb30-70f083352523\") " pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"
Apr 23 18:22:58.806664 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:58.806570 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a97bb6ee-8a64-4279-bb30-70f083352523-proxy-tls\") pod \"ensemble-graph-5d865-8ddbb586f-r2pd9\" (UID: \"a97bb6ee-8a64-4279-bb30-70f083352523\") " pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"
Apr 23 18:22:58.907811 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:58.907785 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a97bb6ee-8a64-4279-bb30-70f083352523-openshift-service-ca-bundle\") pod \"ensemble-graph-5d865-8ddbb586f-r2pd9\" (UID: \"a97bb6ee-8a64-4279-bb30-70f083352523\") " pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"
Apr 23 18:22:58.907926 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:58.907818 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a97bb6ee-8a64-4279-bb30-70f083352523-proxy-tls\") pod \"ensemble-graph-5d865-8ddbb586f-r2pd9\" (UID: \"a97bb6ee-8a64-4279-bb30-70f083352523\") " pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"
Apr 23 18:22:58.907980 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:22:58.907925 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/ensemble-graph-5d865-serving-cert: secret "ensemble-graph-5d865-serving-cert" not found
Apr 23 18:22:58.907980 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:22:58.907977 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a97bb6ee-8a64-4279-bb30-70f083352523-proxy-tls podName:a97bb6ee-8a64-4279-bb30-70f083352523 nodeName:}" failed. No retries permitted until 2026-04-23 18:22:59.407961302 +0000 UTC m=+1486.706207414 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/a97bb6ee-8a64-4279-bb30-70f083352523-proxy-tls") pod "ensemble-graph-5d865-8ddbb586f-r2pd9" (UID: "a97bb6ee-8a64-4279-bb30-70f083352523") : secret "ensemble-graph-5d865-serving-cert" not found
Apr 23 18:22:58.908427 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:58.908407 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a97bb6ee-8a64-4279-bb30-70f083352523-openshift-service-ca-bundle\") pod \"ensemble-graph-5d865-8ddbb586f-r2pd9\" (UID: \"a97bb6ee-8a64-4279-bb30-70f083352523\") " pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"
Apr 23 18:22:59.411548 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:59.411510 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a97bb6ee-8a64-4279-bb30-70f083352523-proxy-tls\") pod \"ensemble-graph-5d865-8ddbb586f-r2pd9\" (UID: \"a97bb6ee-8a64-4279-bb30-70f083352523\") " pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"
Apr 23 18:22:59.413845 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:59.413824 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a97bb6ee-8a64-4279-bb30-70f083352523-proxy-tls\") pod \"ensemble-graph-5d865-8ddbb586f-r2pd9\" (UID: \"a97bb6ee-8a64-4279-bb30-70f083352523\") " pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"
Apr 23 18:22:59.644360 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:59.644303 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"
Apr 23 18:22:59.759083 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:59.759060 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"]
Apr 23 18:22:59.761492 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:22:59.761464 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda97bb6ee_8a64_4279_bb30_70f083352523.slice/crio-0e3f7b2cd0ad68f08fc823aac1e40ff0083bee140e9acd7603d4599d0c97ae25 WatchSource:0}: Error finding container 0e3f7b2cd0ad68f08fc823aac1e40ff0083bee140e9acd7603d4599d0c97ae25: Status 404 returned error can't find the container with id 0e3f7b2cd0ad68f08fc823aac1e40ff0083bee140e9acd7603d4599d0c97ae25
Apr 23 18:22:59.763268 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:22:59.763247 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 18:23:00.549959 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:00.549919 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9" event={"ID":"a97bb6ee-8a64-4279-bb30-70f083352523","Type":"ContainerStarted","Data":"61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2"}
Apr 23 18:23:00.549959 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:00.549963 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9" event={"ID":"a97bb6ee-8a64-4279-bb30-70f083352523","Type":"ContainerStarted","Data":"0e3f7b2cd0ad68f08fc823aac1e40ff0083bee140e9acd7603d4599d0c97ae25"}
Apr 23 18:23:00.550368 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:00.550038 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"
Apr 23 18:23:00.568030 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:00.567984 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9" podStartSLOduration=2.5679721129999997 podStartE2EDuration="2.567972113s" podCreationTimestamp="2026-04-23 18:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:23:00.56573636 +0000 UTC m=+1487.863982493" watchObservedRunningTime="2026-04-23 18:23:00.567972113 +0000 UTC m=+1487.866218245"
Apr 23 18:23:02.149087 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:02.149047 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8" podUID="c1a7bd89-64d1-419a-9886-182284561c3a" containerName="sequence-graph-89c6b" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:23:06.558774 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:06.558742 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"
Apr 23 18:23:07.148432 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:07.148393 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8" podUID="c1a7bd89-64d1-419a-9886-182284561c3a" containerName="sequence-graph-89c6b" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:23:08.800824 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:08.800788 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"]
Apr 23 18:23:08.801218 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:08.801013 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9" podUID="a97bb6ee-8a64-4279-bb30-70f083352523" containerName="ensemble-graph-5d865" containerID="cri-o://61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2" gracePeriod=30
Apr 23 18:23:11.557549 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:11.557504 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9" podUID="a97bb6ee-8a64-4279-bb30-70f083352523" containerName="ensemble-graph-5d865" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:23:12.148911 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:12.148871 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8" podUID="c1a7bd89-64d1-419a-9886-182284561c3a" containerName="sequence-graph-89c6b" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:23:14.593024 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:14.592984 2572 generic.go:358] "Generic (PLEG): container finished" podID="c1a7bd89-64d1-419a-9886-182284561c3a" containerID="e46f9c5e4aa61585b8013ff50dc9aac0ea14656021715d3dd0dd0a59d9751d04" exitCode=0
Apr 23 18:23:14.593400 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:14.593036 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8" event={"ID":"c1a7bd89-64d1-419a-9886-182284561c3a","Type":"ContainerDied","Data":"e46f9c5e4aa61585b8013ff50dc9aac0ea14656021715d3dd0dd0a59d9751d04"}
Apr 23 18:23:15.038219 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:15.038198 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"
Apr 23 18:23:15.125713 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:15.125683 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1a7bd89-64d1-419a-9886-182284561c3a-openshift-service-ca-bundle\") pod \"c1a7bd89-64d1-419a-9886-182284561c3a\" (UID: \"c1a7bd89-64d1-419a-9886-182284561c3a\") "
Apr 23 18:23:15.125845 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:15.125754 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1a7bd89-64d1-419a-9886-182284561c3a-proxy-tls\") pod \"c1a7bd89-64d1-419a-9886-182284561c3a\" (UID: \"c1a7bd89-64d1-419a-9886-182284561c3a\") "
Apr 23 18:23:15.126048 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:15.126014 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a7bd89-64d1-419a-9886-182284561c3a-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "c1a7bd89-64d1-419a-9886-182284561c3a" (UID: "c1a7bd89-64d1-419a-9886-182284561c3a"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:23:15.127731 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:15.127700 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a7bd89-64d1-419a-9886-182284561c3a-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "c1a7bd89-64d1-419a-9886-182284561c3a" (UID: "c1a7bd89-64d1-419a-9886-182284561c3a"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:23:15.226818 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:15.226724 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1a7bd89-64d1-419a-9886-182284561c3a-openshift-service-ca-bundle\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:23:15.226818 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:15.226775 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c1a7bd89-64d1-419a-9886-182284561c3a-proxy-tls\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:23:15.597256 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:15.597219 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8" event={"ID":"c1a7bd89-64d1-419a-9886-182284561c3a","Type":"ContainerDied","Data":"1c15b3642ca20d5de5b066accea37d1fd8a9cf1c5167802f12412de2648a3819"}
Apr 23 18:23:15.597698 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:15.597268 2572 scope.go:117] "RemoveContainer" containerID="e46f9c5e4aa61585b8013ff50dc9aac0ea14656021715d3dd0dd0a59d9751d04"
Apr 23 18:23:15.597698 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:15.597234 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"
Apr 23 18:23:15.615614 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:15.615591 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"]
Apr 23 18:23:15.618937 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:15.618911 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-89c6b-5dcc6f7895-k8hq8"]
Apr 23 18:23:16.557517 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:16.557475 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9" podUID="a97bb6ee-8a64-4279-bb30-70f083352523" containerName="ensemble-graph-5d865" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:23:17.307555 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:17.307518 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a7bd89-64d1-419a-9886-182284561c3a" path="/var/lib/kubelet/pods/c1a7bd89-64d1-419a-9886-182284561c3a/volumes"
Apr 23 18:23:21.557751 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:21.557704 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9" podUID="a97bb6ee-8a64-4279-bb30-70f083352523" containerName="ensemble-graph-5d865" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:23:21.558155 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:21.557820 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"
Apr 23 18:23:26.557654 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:26.557574 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9" podUID="a97bb6ee-8a64-4279-bb30-70f083352523" containerName="ensemble-graph-5d865" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:23:30.798538 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:23:30.798493 2572 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fbbc82af82f7be7060e8d6a7931dd3dfe3dfc9311f4a364c6b41d9fd9ec028ea/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fbbc82af82f7be7060e8d6a7931dd3dfe3dfc9311f4a364c6b41d9fd9ec028ea/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kserve-ci-e2e-test_sequence-graph-89c6b-5dcc6f7895-k8hq8_c1a7bd89-64d1-419a-9886-182284561c3a/sequence-graph-89c6b/0.log" to get inode usage: stat /var/log/pods/kserve-ci-e2e-test_sequence-graph-89c6b-5dcc6f7895-k8hq8_c1a7bd89-64d1-419a-9886-182284561c3a/sequence-graph-89c6b/0.log: no such file or directory
Apr 23 18:23:31.556977 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:31.556937 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9" podUID="a97bb6ee-8a64-4279-bb30-70f083352523" containerName="ensemble-graph-5d865" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:23:36.557896 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:36.557850 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9" podUID="a97bb6ee-8a64-4279-bb30-70f083352523" containerName="ensemble-graph-5d865" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:23:38.826974 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:23:38.826919 2572 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda97bb6ee_8a64_4279_bb30_70f083352523.slice/crio-conmon-61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda97bb6ee_8a64_4279_bb30_70f083352523.slice/crio-61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1a7bd89_64d1_419a_9886_182284561c3a.slice/crio-conmon-e46f9c5e4aa61585b8013ff50dc9aac0ea14656021715d3dd0dd0a59d9751d04.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1a7bd89_64d1_419a_9886_182284561c3a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1a7bd89_64d1_419a_9886_182284561c3a.slice/crio-1c15b3642ca20d5de5b066accea37d1fd8a9cf1c5167802f12412de2648a3819\": RecentStats: unable to find data in memory cache]"
Apr 23 18:23:38.827397 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:23:38.826975 2572 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda97bb6ee_8a64_4279_bb30_70f083352523.slice/crio-conmon-61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2.scope\": RecentStats: unable to find data in memory cache]"
Apr 23 18:23:38.827397 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:23:38.826974 2572 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1a7bd89_64d1_419a_9886_182284561c3a.slice/crio-conmon-e46f9c5e4aa61585b8013ff50dc9aac0ea14656021715d3dd0dd0a59d9751d04.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1a7bd89_64d1_419a_9886_182284561c3a.slice/crio-1c15b3642ca20d5de5b066accea37d1fd8a9cf1c5167802f12412de2648a3819\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1a7bd89_64d1_419a_9886_182284561c3a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda97bb6ee_8a64_4279_bb30_70f083352523.slice/crio-61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda97bb6ee_8a64_4279_bb30_70f083352523.slice/crio-conmon-61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2.scope\": RecentStats: unable to find data in memory cache]"
Apr 23 18:23:38.956573 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:38.956547 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"
Apr 23 18:23:38.989093 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:38.989067 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a97bb6ee-8a64-4279-bb30-70f083352523-proxy-tls\") pod \"a97bb6ee-8a64-4279-bb30-70f083352523\" (UID: \"a97bb6ee-8a64-4279-bb30-70f083352523\") "
Apr 23 18:23:38.989232 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:38.989131 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a97bb6ee-8a64-4279-bb30-70f083352523-openshift-service-ca-bundle\") pod \"a97bb6ee-8a64-4279-bb30-70f083352523\" (UID: \"a97bb6ee-8a64-4279-bb30-70f083352523\") "
Apr 23 18:23:38.989499 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:38.989476 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a97bb6ee-8a64-4279-bb30-70f083352523-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "a97bb6ee-8a64-4279-bb30-70f083352523" (UID: "a97bb6ee-8a64-4279-bb30-70f083352523"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:23:38.991156 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:38.991132 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a97bb6ee-8a64-4279-bb30-70f083352523-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "a97bb6ee-8a64-4279-bb30-70f083352523" (UID: "a97bb6ee-8a64-4279-bb30-70f083352523"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:23:39.090368 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:39.090267 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a97bb6ee-8a64-4279-bb30-70f083352523-proxy-tls\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:23:39.090368 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:39.090298 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a97bb6ee-8a64-4279-bb30-70f083352523-openshift-service-ca-bundle\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:23:39.673032 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:39.672994 2572 generic.go:358] "Generic (PLEG): container finished" podID="a97bb6ee-8a64-4279-bb30-70f083352523" containerID="61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2" exitCode=0
Apr 23 18:23:39.673213 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:39.673063 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9" Apr 23 18:23:39.673213 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:39.673058 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9" event={"ID":"a97bb6ee-8a64-4279-bb30-70f083352523","Type":"ContainerDied","Data":"61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2"} Apr 23 18:23:39.673213 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:39.673104 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9" event={"ID":"a97bb6ee-8a64-4279-bb30-70f083352523","Type":"ContainerDied","Data":"0e3f7b2cd0ad68f08fc823aac1e40ff0083bee140e9acd7603d4599d0c97ae25"} Apr 23 18:23:39.673213 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:39.673123 2572 scope.go:117] "RemoveContainer" containerID="61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2" Apr 23 18:23:39.680965 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:39.680948 2572 scope.go:117] "RemoveContainer" containerID="61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2" Apr 23 18:23:39.681205 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:23:39.681187 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2\": container with ID starting with 61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2 not found: ID does not exist" containerID="61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2" Apr 23 18:23:39.681250 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:39.681214 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2"} err="failed to get container status 
\"61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2\": rpc error: code = NotFound desc = could not find container \"61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2\": container with ID starting with 61f03980d21097d29b9bc81a249e069581886a4efc268f2ea040901699ad29f2 not found: ID does not exist" Apr 23 18:23:39.689680 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:39.689656 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"] Apr 23 18:23:39.694862 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:39.694842 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-5d865-8ddbb586f-r2pd9"] Apr 23 18:23:41.307703 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:41.307668 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a97bb6ee-8a64-4279-bb30-70f083352523" path="/var/lib/kubelet/pods/a97bb6ee-8a64-4279-bb30-70f083352523/volumes" Apr 23 18:23:54.562779 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.562740 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w"] Apr 23 18:23:54.563137 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.563004 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c1a7bd89-64d1-419a-9886-182284561c3a" containerName="sequence-graph-89c6b" Apr 23 18:23:54.563137 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.563015 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a7bd89-64d1-419a-9886-182284561c3a" containerName="sequence-graph-89c6b" Apr 23 18:23:54.563137 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.563034 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a97bb6ee-8a64-4279-bb30-70f083352523" containerName="ensemble-graph-5d865" Apr 23 18:23:54.563137 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.563041 2572 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a97bb6ee-8a64-4279-bb30-70f083352523" containerName="ensemble-graph-5d865" Apr 23 18:23:54.563137 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.563077 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="c1a7bd89-64d1-419a-9886-182284561c3a" containerName="sequence-graph-89c6b" Apr 23 18:23:54.563137 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.563085 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="a97bb6ee-8a64-4279-bb30-70f083352523" containerName="ensemble-graph-5d865" Apr 23 18:23:54.567226 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.567205 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" Apr 23 18:23:54.569951 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.569927 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-89ff7-kube-rbac-proxy-sar-config\"" Apr 23 18:23:54.570052 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.569996 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-b65vr\"" Apr 23 18:23:54.570175 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.570062 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-89ff7-serving-cert\"" Apr 23 18:23:54.570175 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.570085 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 23 18:23:54.572708 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.572685 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w"] Apr 23 18:23:54.595574 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.595542 2572 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5a67a37-ede8-4317-806e-a08a8901a495-openshift-service-ca-bundle\") pod \"sequence-graph-89ff7-6f7fc5787f-7bn8w\" (UID: \"e5a67a37-ede8-4317-806e-a08a8901a495\") " pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" Apr 23 18:23:54.595673 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.595576 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5a67a37-ede8-4317-806e-a08a8901a495-proxy-tls\") pod \"sequence-graph-89ff7-6f7fc5787f-7bn8w\" (UID: \"e5a67a37-ede8-4317-806e-a08a8901a495\") " pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" Apr 23 18:23:54.696438 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.696410 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5a67a37-ede8-4317-806e-a08a8901a495-openshift-service-ca-bundle\") pod \"sequence-graph-89ff7-6f7fc5787f-7bn8w\" (UID: \"e5a67a37-ede8-4317-806e-a08a8901a495\") " pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" Apr 23 18:23:54.696598 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.696450 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5a67a37-ede8-4317-806e-a08a8901a495-proxy-tls\") pod \"sequence-graph-89ff7-6f7fc5787f-7bn8w\" (UID: \"e5a67a37-ede8-4317-806e-a08a8901a495\") " pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" Apr 23 18:23:54.697111 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.697083 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5a67a37-ede8-4317-806e-a08a8901a495-openshift-service-ca-bundle\") pod 
\"sequence-graph-89ff7-6f7fc5787f-7bn8w\" (UID: \"e5a67a37-ede8-4317-806e-a08a8901a495\") " pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" Apr 23 18:23:54.698748 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.698728 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5a67a37-ede8-4317-806e-a08a8901a495-proxy-tls\") pod \"sequence-graph-89ff7-6f7fc5787f-7bn8w\" (UID: \"e5a67a37-ede8-4317-806e-a08a8901a495\") " pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" Apr 23 18:23:54.877940 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.877858 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" Apr 23 18:23:54.999345 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:54.999300 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w"] Apr 23 18:23:55.001849 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:23:55.001819 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5a67a37_ede8_4317_806e_a08a8901a495.slice/crio-21f7e14314898ebfb30a26e8fdec87c8b7d95898a458715885e8a5e3a19583ed WatchSource:0}: Error finding container 21f7e14314898ebfb30a26e8fdec87c8b7d95898a458715885e8a5e3a19583ed: Status 404 returned error can't find the container with id 21f7e14314898ebfb30a26e8fdec87c8b7d95898a458715885e8a5e3a19583ed Apr 23 18:23:55.724819 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:55.724777 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" event={"ID":"e5a67a37-ede8-4317-806e-a08a8901a495","Type":"ContainerStarted","Data":"3b854c4cf483f16cbcff3f2f28f1345d0b5362b662ba294b83229100bac33236"} Apr 23 18:23:55.724819 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:55.724819 2572 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" event={"ID":"e5a67a37-ede8-4317-806e-a08a8901a495","Type":"ContainerStarted","Data":"21f7e14314898ebfb30a26e8fdec87c8b7d95898a458715885e8a5e3a19583ed"} Apr 23 18:23:55.725271 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:55.724961 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" Apr 23 18:23:55.745397 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:23:55.745316 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" podStartSLOduration=1.745300466 podStartE2EDuration="1.745300466s" podCreationTimestamp="2026-04-23 18:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:23:55.743544165 +0000 UTC m=+1543.041790297" watchObservedRunningTime="2026-04-23 18:23:55.745300466 +0000 UTC m=+1543.043546599" Apr 23 18:24:01.733203 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:01.733175 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" Apr 23 18:24:04.638940 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:04.638903 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w"] Apr 23 18:24:04.639473 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:04.639133 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" podUID="e5a67a37-ede8-4317-806e-a08a8901a495" containerName="sequence-graph-89ff7" containerID="cri-o://3b854c4cf483f16cbcff3f2f28f1345d0b5362b662ba294b83229100bac33236" gracePeriod=30 Apr 23 18:24:06.731850 ip-10-0-130-162 kubenswrapper[2572]: I0423 
18:24:06.731804 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" podUID="e5a67a37-ede8-4317-806e-a08a8901a495" containerName="sequence-graph-89ff7" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:24:09.021527 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:09.021491 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt"] Apr 23 18:24:09.024570 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:09.024554 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" Apr 23 18:24:09.027364 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:09.027343 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-83978-kube-rbac-proxy-sar-config\"" Apr 23 18:24:09.027483 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:09.027348 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-83978-serving-cert\"" Apr 23 18:24:09.034495 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:09.034470 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt"] Apr 23 18:24:09.108737 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:09.108673 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe-openshift-service-ca-bundle\") pod \"ensemble-graph-83978-d4f856f6f-x2xqt\" (UID: \"868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe\") " pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" Apr 23 18:24:09.108889 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:09.108868 2572 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe-proxy-tls\") pod \"ensemble-graph-83978-d4f856f6f-x2xqt\" (UID: \"868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe\") " pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" Apr 23 18:24:09.209471 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:09.209431 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe-openshift-service-ca-bundle\") pod \"ensemble-graph-83978-d4f856f6f-x2xqt\" (UID: \"868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe\") " pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" Apr 23 18:24:09.209471 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:09.209477 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe-proxy-tls\") pod \"ensemble-graph-83978-d4f856f6f-x2xqt\" (UID: \"868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe\") " pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" Apr 23 18:24:09.209710 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:24:09.209584 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/ensemble-graph-83978-serving-cert: secret "ensemble-graph-83978-serving-cert" not found Apr 23 18:24:09.209710 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:24:09.209645 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe-proxy-tls podName:868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe nodeName:}" failed. No retries permitted until 2026-04-23 18:24:09.709627148 +0000 UTC m=+1557.007873259 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe-proxy-tls") pod "ensemble-graph-83978-d4f856f6f-x2xqt" (UID: "868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe") : secret "ensemble-graph-83978-serving-cert" not found Apr 23 18:24:09.210152 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:09.210128 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe-openshift-service-ca-bundle\") pod \"ensemble-graph-83978-d4f856f6f-x2xqt\" (UID: \"868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe\") " pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" Apr 23 18:24:09.714581 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:09.714550 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe-proxy-tls\") pod \"ensemble-graph-83978-d4f856f6f-x2xqt\" (UID: \"868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe\") " pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" Apr 23 18:24:09.716789 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:09.716771 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe-proxy-tls\") pod \"ensemble-graph-83978-d4f856f6f-x2xqt\" (UID: \"868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe\") " pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" Apr 23 18:24:09.935380 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:09.935315 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" Apr 23 18:24:10.054622 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:10.054598 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt"] Apr 23 18:24:10.057155 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:24:10.057126 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod868c11ce_d2e3_4592_9d95_b9ac3f7ee8fe.slice/crio-d294e6b6e02a77b7a76e2e418458d6a339d70e8f6675cac65fb6b0dbd0362817 WatchSource:0}: Error finding container d294e6b6e02a77b7a76e2e418458d6a339d70e8f6675cac65fb6b0dbd0362817: Status 404 returned error can't find the container with id d294e6b6e02a77b7a76e2e418458d6a339d70e8f6675cac65fb6b0dbd0362817 Apr 23 18:24:10.771162 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:10.771128 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" event={"ID":"868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe","Type":"ContainerStarted","Data":"4aa7c749c7dee5a3076bf149025eb4f0dbcfd055a54c6b9304689371417027b6"} Apr 23 18:24:10.771162 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:10.771163 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" event={"ID":"868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe","Type":"ContainerStarted","Data":"d294e6b6e02a77b7a76e2e418458d6a339d70e8f6675cac65fb6b0dbd0362817"} Apr 23 18:24:10.771402 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:10.771186 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" Apr 23 18:24:10.787887 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:10.787827 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" 
podStartSLOduration=1.7878112750000001 podStartE2EDuration="1.787811275s" podCreationTimestamp="2026-04-23 18:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:24:10.787127052 +0000 UTC m=+1558.085373185" watchObservedRunningTime="2026-04-23 18:24:10.787811275 +0000 UTC m=+1558.086057407" Apr 23 18:24:11.732054 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:11.732009 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" podUID="e5a67a37-ede8-4317-806e-a08a8901a495" containerName="sequence-graph-89ff7" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:24:16.732117 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:16.732075 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" podUID="e5a67a37-ede8-4317-806e-a08a8901a495" containerName="sequence-graph-89ff7" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:24:16.732586 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:16.732198 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" Apr 23 18:24:16.780132 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:16.780106 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" Apr 23 18:24:21.732335 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:21.732285 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" podUID="e5a67a37-ede8-4317-806e-a08a8901a495" containerName="sequence-graph-89ff7" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:24:26.732546 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:26.732495 2572 
prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" podUID="e5a67a37-ede8-4317-806e-a08a8901a495" containerName="sequence-graph-89ff7" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:24:31.731781 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:31.731741 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" podUID="e5a67a37-ede8-4317-806e-a08a8901a495" containerName="sequence-graph-89ff7" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:24:34.703462 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:24:34.703419 2572 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5a67a37_ede8_4317_806e_a08a8901a495.slice/crio-21f7e14314898ebfb30a26e8fdec87c8b7d95898a458715885e8a5e3a19583ed\": RecentStats: unable to find data in memory cache]" Apr 23 18:24:34.832567 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:34.832541 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" Apr 23 18:24:34.852227 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:34.852193 2572 generic.go:358] "Generic (PLEG): container finished" podID="e5a67a37-ede8-4317-806e-a08a8901a495" containerID="3b854c4cf483f16cbcff3f2f28f1345d0b5362b662ba294b83229100bac33236" exitCode=0 Apr 23 18:24:34.852384 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:34.852227 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" event={"ID":"e5a67a37-ede8-4317-806e-a08a8901a495","Type":"ContainerDied","Data":"3b854c4cf483f16cbcff3f2f28f1345d0b5362b662ba294b83229100bac33236"} Apr 23 18:24:34.852384 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:34.852266 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" Apr 23 18:24:34.852384 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:34.852278 2572 scope.go:117] "RemoveContainer" containerID="3b854c4cf483f16cbcff3f2f28f1345d0b5362b662ba294b83229100bac33236" Apr 23 18:24:34.852548 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:34.852266 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w" event={"ID":"e5a67a37-ede8-4317-806e-a08a8901a495","Type":"ContainerDied","Data":"21f7e14314898ebfb30a26e8fdec87c8b7d95898a458715885e8a5e3a19583ed"} Apr 23 18:24:34.859766 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:34.859737 2572 scope.go:117] "RemoveContainer" containerID="3b854c4cf483f16cbcff3f2f28f1345d0b5362b662ba294b83229100bac33236" Apr 23 18:24:34.860001 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:24:34.859981 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b854c4cf483f16cbcff3f2f28f1345d0b5362b662ba294b83229100bac33236\": container with ID starting with 
3b854c4cf483f16cbcff3f2f28f1345d0b5362b662ba294b83229100bac33236 not found: ID does not exist" containerID="3b854c4cf483f16cbcff3f2f28f1345d0b5362b662ba294b83229100bac33236"
Apr 23 18:24:34.860082 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:34.860008 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b854c4cf483f16cbcff3f2f28f1345d0b5362b662ba294b83229100bac33236"} err="failed to get container status \"3b854c4cf483f16cbcff3f2f28f1345d0b5362b662ba294b83229100bac33236\": rpc error: code = NotFound desc = could not find container \"3b854c4cf483f16cbcff3f2f28f1345d0b5362b662ba294b83229100bac33236\": container with ID starting with 3b854c4cf483f16cbcff3f2f28f1345d0b5362b662ba294b83229100bac33236 not found: ID does not exist"
Apr 23 18:24:34.895364 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:34.895282 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5a67a37-ede8-4317-806e-a08a8901a495-proxy-tls\") pod \"e5a67a37-ede8-4317-806e-a08a8901a495\" (UID: \"e5a67a37-ede8-4317-806e-a08a8901a495\") "
Apr 23 18:24:34.895364 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:34.895339 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5a67a37-ede8-4317-806e-a08a8901a495-openshift-service-ca-bundle\") pod \"e5a67a37-ede8-4317-806e-a08a8901a495\" (UID: \"e5a67a37-ede8-4317-806e-a08a8901a495\") "
Apr 23 18:24:34.895714 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:34.895691 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5a67a37-ede8-4317-806e-a08a8901a495-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "e5a67a37-ede8-4317-806e-a08a8901a495" (UID: "e5a67a37-ede8-4317-806e-a08a8901a495"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:24:34.897272 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:34.897238 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a67a37-ede8-4317-806e-a08a8901a495-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e5a67a37-ede8-4317-806e-a08a8901a495" (UID: "e5a67a37-ede8-4317-806e-a08a8901a495"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:24:34.996019 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:34.995981 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5a67a37-ede8-4317-806e-a08a8901a495-proxy-tls\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:24:34.996019 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:34.996017 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5a67a37-ede8-4317-806e-a08a8901a495-openshift-service-ca-bundle\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:24:35.046150 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.046123 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-cc6xb"]
Apr 23 18:24:35.046438 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.046424 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e5a67a37-ede8-4317-806e-a08a8901a495" containerName="sequence-graph-89ff7"
Apr 23 18:24:35.046484 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.046439 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a67a37-ede8-4317-806e-a08a8901a495" containerName="sequence-graph-89ff7"
Apr 23 18:24:35.046526 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.046497 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="e5a67a37-ede8-4317-806e-a08a8901a495" containerName="sequence-graph-89ff7"
Apr 23 18:24:35.050525 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.050510 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-cc6xb"
Apr 23 18:24:35.054664 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.054644 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\""
Apr 23 18:24:35.058706 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.058684 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-cc6xb"]
Apr 23 18:24:35.180872 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.180845 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w"]
Apr 23 18:24:35.182179 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.182157 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-89ff7-6f7fc5787f-7bn8w"]
Apr 23 18:24:35.198163 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.198144 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/33d63faf-1112-4576-8025-7dc6736d218a-kubelet-config\") pod \"global-pull-secret-syncer-cc6xb\" (UID: \"33d63faf-1112-4576-8025-7dc6736d218a\") " pod="kube-system/global-pull-secret-syncer-cc6xb"
Apr 23 18:24:35.198234 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.198201 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/33d63faf-1112-4576-8025-7dc6736d218a-original-pull-secret\") pod \"global-pull-secret-syncer-cc6xb\" (UID: \"33d63faf-1112-4576-8025-7dc6736d218a\") " pod="kube-system/global-pull-secret-syncer-cc6xb"
Apr 23 18:24:35.198270 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.198255 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/33d63faf-1112-4576-8025-7dc6736d218a-dbus\") pod \"global-pull-secret-syncer-cc6xb\" (UID: \"33d63faf-1112-4576-8025-7dc6736d218a\") " pod="kube-system/global-pull-secret-syncer-cc6xb"
Apr 23 18:24:35.298831 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.298796 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/33d63faf-1112-4576-8025-7dc6736d218a-kubelet-config\") pod \"global-pull-secret-syncer-cc6xb\" (UID: \"33d63faf-1112-4576-8025-7dc6736d218a\") " pod="kube-system/global-pull-secret-syncer-cc6xb"
Apr 23 18:24:35.299033 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.298874 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/33d63faf-1112-4576-8025-7dc6736d218a-original-pull-secret\") pod \"global-pull-secret-syncer-cc6xb\" (UID: \"33d63faf-1112-4576-8025-7dc6736d218a\") " pod="kube-system/global-pull-secret-syncer-cc6xb"
Apr 23 18:24:35.299033 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.298902 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/33d63faf-1112-4576-8025-7dc6736d218a-dbus\") pod \"global-pull-secret-syncer-cc6xb\" (UID: \"33d63faf-1112-4576-8025-7dc6736d218a\") " pod="kube-system/global-pull-secret-syncer-cc6xb"
Apr 23 18:24:35.299033 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.298914 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/33d63faf-1112-4576-8025-7dc6736d218a-kubelet-config\") pod \"global-pull-secret-syncer-cc6xb\" (UID: \"33d63faf-1112-4576-8025-7dc6736d218a\") " pod="kube-system/global-pull-secret-syncer-cc6xb"
Apr 23 18:24:35.299262 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.299070 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/33d63faf-1112-4576-8025-7dc6736d218a-dbus\") pod \"global-pull-secret-syncer-cc6xb\" (UID: \"33d63faf-1112-4576-8025-7dc6736d218a\") " pod="kube-system/global-pull-secret-syncer-cc6xb"
Apr 23 18:24:35.301310 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.301289 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/33d63faf-1112-4576-8025-7dc6736d218a-original-pull-secret\") pod \"global-pull-secret-syncer-cc6xb\" (UID: \"33d63faf-1112-4576-8025-7dc6736d218a\") " pod="kube-system/global-pull-secret-syncer-cc6xb"
Apr 23 18:24:35.308268 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.308243 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a67a37-ede8-4317-806e-a08a8901a495" path="/var/lib/kubelet/pods/e5a67a37-ede8-4317-806e-a08a8901a495/volumes"
Apr 23 18:24:35.359941 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.359914 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-cc6xb"
Apr 23 18:24:35.476950 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.476925 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-cc6xb"]
Apr 23 18:24:35.478694 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:24:35.478660 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33d63faf_1112_4576_8025_7dc6736d218a.slice/crio-1b820fc83cf1baa6e48ba4ace176530e3d1f7b625410885e51d14da0e6edb744 WatchSource:0}: Error finding container 1b820fc83cf1baa6e48ba4ace176530e3d1f7b625410885e51d14da0e6edb744: Status 404 returned error can't find the container with id 1b820fc83cf1baa6e48ba4ace176530e3d1f7b625410885e51d14da0e6edb744
Apr 23 18:24:35.857208 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:35.857174 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-cc6xb" event={"ID":"33d63faf-1112-4576-8025-7dc6736d218a","Type":"ContainerStarted","Data":"1b820fc83cf1baa6e48ba4ace176530e3d1f7b625410885e51d14da0e6edb744"}
Apr 23 18:24:40.874533 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:40.874495 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-cc6xb" event={"ID":"33d63faf-1112-4576-8025-7dc6736d218a","Type":"ContainerStarted","Data":"d14de12235ef2e0c030d8d3fa26636c1aab3aa6e64f8d81960fcffe3a6254b83"}
Apr 23 18:24:40.890383 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:24:40.890338 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-cc6xb" podStartSLOduration=1.554300117 podStartE2EDuration="5.890306852s" podCreationTimestamp="2026-04-23 18:24:35 +0000 UTC" firstStartedPulling="2026-04-23 18:24:35.480420067 +0000 UTC m=+1582.778666178" lastFinishedPulling="2026-04-23 18:24:39.816426795 +0000 UTC m=+1587.114672913" observedRunningTime="2026-04-23 18:24:40.888442242 +0000 UTC m=+1588.186688374" watchObservedRunningTime="2026-04-23 18:24:40.890306852 +0000 UTC m=+1588.188552998"
Apr 23 18:25:14.890827 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:14.890790 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"]
Apr 23 18:25:14.895379 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:14.895358 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"
Apr 23 18:25:14.897963 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:14.897939 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-02052-serving-cert\""
Apr 23 18:25:14.898077 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:14.898005 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-02052-kube-rbac-proxy-sar-config\""
Apr 23 18:25:14.901696 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:14.901657 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"]
Apr 23 18:25:15.089306 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:15.089248 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/336f8215-db45-4321-996c-bfa21c9b9094-proxy-tls\") pod \"sequence-graph-02052-76989ff5b8-vh6wr\" (UID: \"336f8215-db45-4321-996c-bfa21c9b9094\") " pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"
Apr 23 18:25:15.089498 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:15.089340 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/336f8215-db45-4321-996c-bfa21c9b9094-openshift-service-ca-bundle\") pod \"sequence-graph-02052-76989ff5b8-vh6wr\" (UID: \"336f8215-db45-4321-996c-bfa21c9b9094\") " pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"
Apr 23 18:25:15.189943 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:15.189849 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/336f8215-db45-4321-996c-bfa21c9b9094-openshift-service-ca-bundle\") pod \"sequence-graph-02052-76989ff5b8-vh6wr\" (UID: \"336f8215-db45-4321-996c-bfa21c9b9094\") " pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"
Apr 23 18:25:15.189943 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:15.189920 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/336f8215-db45-4321-996c-bfa21c9b9094-proxy-tls\") pod \"sequence-graph-02052-76989ff5b8-vh6wr\" (UID: \"336f8215-db45-4321-996c-bfa21c9b9094\") " pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"
Apr 23 18:25:15.190555 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:15.190531 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/336f8215-db45-4321-996c-bfa21c9b9094-openshift-service-ca-bundle\") pod \"sequence-graph-02052-76989ff5b8-vh6wr\" (UID: \"336f8215-db45-4321-996c-bfa21c9b9094\") " pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"
Apr 23 18:25:15.192252 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:15.192230 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/336f8215-db45-4321-996c-bfa21c9b9094-proxy-tls\") pod \"sequence-graph-02052-76989ff5b8-vh6wr\" (UID: \"336f8215-db45-4321-996c-bfa21c9b9094\") " pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"
Apr 23 18:25:15.207171 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:15.207150 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"
Apr 23 18:25:15.321706 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:15.321678 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"]
Apr 23 18:25:15.323164 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:25:15.323135 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod336f8215_db45_4321_996c_bfa21c9b9094.slice/crio-b8d6fc7188cfe037b6682b7f24c8f609634c238e6c40a8b032a63c0fcbe0a2ec WatchSource:0}: Error finding container b8d6fc7188cfe037b6682b7f24c8f609634c238e6c40a8b032a63c0fcbe0a2ec: Status 404 returned error can't find the container with id b8d6fc7188cfe037b6682b7f24c8f609634c238e6c40a8b032a63c0fcbe0a2ec
Apr 23 18:25:15.990959 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:15.990923 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr" event={"ID":"336f8215-db45-4321-996c-bfa21c9b9094","Type":"ContainerStarted","Data":"560ce28f7e76bf009545120daa89e88452ed2cda12c919674027445ccc1de9c6"}
Apr 23 18:25:15.991356 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:15.990963 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr" event={"ID":"336f8215-db45-4321-996c-bfa21c9b9094","Type":"ContainerStarted","Data":"b8d6fc7188cfe037b6682b7f24c8f609634c238e6c40a8b032a63c0fcbe0a2ec"}
Apr 23 18:25:15.991356 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:15.990996 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"
Apr 23 18:25:16.008773 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:16.008725 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr" podStartSLOduration=2.008712983 podStartE2EDuration="2.008712983s" podCreationTimestamp="2026-04-23 18:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:25:16.008062954 +0000 UTC m=+1623.306309088" watchObservedRunningTime="2026-04-23 18:25:16.008712983 +0000 UTC m=+1623.306959116"
Apr 23 18:25:22.001826 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:25:22.001791 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"
Apr 23 18:32:23.786816 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:23.786734 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt"]
Apr 23 18:32:23.789219 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:23.786966 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" podUID="868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe" containerName="ensemble-graph-83978" containerID="cri-o://4aa7c749c7dee5a3076bf149025eb4f0dbcfd055a54c6b9304689371417027b6" gracePeriod=30
Apr 23 18:32:26.779271 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:26.779235 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" podUID="868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe" containerName="ensemble-graph-83978" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:32:31.778610 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:31.778570 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" podUID="868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe" containerName="ensemble-graph-83978" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:32:36.779623 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:36.779585 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" podUID="868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe" containerName="ensemble-graph-83978" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:32:36.780006 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:36.779699 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt"
Apr 23 18:32:41.779523 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:41.779482 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" podUID="868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe" containerName="ensemble-graph-83978" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:32:46.779550 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:46.779506 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" podUID="868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe" containerName="ensemble-graph-83978" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:32:51.778921 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:51.778872 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" podUID="868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe" containerName="ensemble-graph-83978" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:32:53.921258 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:53.921236 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt"
Apr 23 18:32:54.021152 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:54.021122 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe-proxy-tls\") pod \"868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe\" (UID: \"868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe\") "
Apr 23 18:32:54.021289 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:54.021205 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe-openshift-service-ca-bundle\") pod \"868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe\" (UID: \"868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe\") "
Apr 23 18:32:54.021563 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:54.021542 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe" (UID: "868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 23 18:32:54.023252 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:54.023221 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe" (UID: "868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:32:54.122447 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:54.122375 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe-openshift-service-ca-bundle\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:32:54.122447 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:54.122401 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe-proxy-tls\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:32:54.396009 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:54.395925 2572 generic.go:358] "Generic (PLEG): container finished" podID="868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe" containerID="4aa7c749c7dee5a3076bf149025eb4f0dbcfd055a54c6b9304689371417027b6" exitCode=0
Apr 23 18:32:54.396009 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:54.395988 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt"
Apr 23 18:32:54.396009 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:54.396006 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" event={"ID":"868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe","Type":"ContainerDied","Data":"4aa7c749c7dee5a3076bf149025eb4f0dbcfd055a54c6b9304689371417027b6"}
Apr 23 18:32:54.396221 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:54.396036 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt" event={"ID":"868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe","Type":"ContainerDied","Data":"d294e6b6e02a77b7a76e2e418458d6a339d70e8f6675cac65fb6b0dbd0362817"}
Apr 23 18:32:54.396221 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:54.396054 2572 scope.go:117] "RemoveContainer" containerID="4aa7c749c7dee5a3076bf149025eb4f0dbcfd055a54c6b9304689371417027b6"
Apr 23 18:32:54.405249 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:54.405228 2572 scope.go:117] "RemoveContainer" containerID="4aa7c749c7dee5a3076bf149025eb4f0dbcfd055a54c6b9304689371417027b6"
Apr 23 18:32:54.405556 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:32:54.405533 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aa7c749c7dee5a3076bf149025eb4f0dbcfd055a54c6b9304689371417027b6\": container with ID starting with 4aa7c749c7dee5a3076bf149025eb4f0dbcfd055a54c6b9304689371417027b6 not found: ID does not exist" containerID="4aa7c749c7dee5a3076bf149025eb4f0dbcfd055a54c6b9304689371417027b6"
Apr 23 18:32:54.405631 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:54.405566 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa7c749c7dee5a3076bf149025eb4f0dbcfd055a54c6b9304689371417027b6"} err="failed to get container status \"4aa7c749c7dee5a3076bf149025eb4f0dbcfd055a54c6b9304689371417027b6\": rpc error: code = NotFound desc = could not find container \"4aa7c749c7dee5a3076bf149025eb4f0dbcfd055a54c6b9304689371417027b6\": container with ID starting with 4aa7c749c7dee5a3076bf149025eb4f0dbcfd055a54c6b9304689371417027b6 not found: ID does not exist"
Apr 23 18:32:54.418388 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:54.418367 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt"]
Apr 23 18:32:54.422248 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:54.422228 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-83978-d4f856f6f-x2xqt"]
Apr 23 18:32:55.307648 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:32:55.307614 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe" path="/var/lib/kubelet/pods/868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe/volumes"
Apr 23 18:33:24.063878 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:24.063846 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"]
Apr 23 18:33:24.064299 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:24.064118 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe" containerName="ensemble-graph-83978"
Apr 23 18:33:24.064299 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:24.064128 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe" containerName="ensemble-graph-83978"
Apr 23 18:33:24.064299 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:24.064181 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="868c11ce-d2e3-4592-9d95-b9ac3f7ee8fe" containerName="ensemble-graph-83978"
Apr 23 18:33:24.067414 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:24.067393 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"
Apr 23 18:33:24.070018 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:24.069988 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-60f5a-serving-cert\""
Apr 23 18:33:24.070142 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:24.070049 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-60f5a-kube-rbac-proxy-sar-config\""
Apr 23 18:33:24.081478 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:24.081452 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"]
Apr 23 18:33:24.234685 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:24.234639 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76f98d5f-9515-49f1-aba2-10550e02e50c-openshift-service-ca-bundle\") pod \"splitter-graph-60f5a-6466846f6-wtc9r\" (UID: \"76f98d5f-9515-49f1-aba2-10550e02e50c\") " pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"
Apr 23 18:33:24.234685 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:24.234697 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76f98d5f-9515-49f1-aba2-10550e02e50c-proxy-tls\") pod \"splitter-graph-60f5a-6466846f6-wtc9r\" (UID: \"76f98d5f-9515-49f1-aba2-10550e02e50c\") " pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"
Apr 23 18:33:24.335970 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:24.335878 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76f98d5f-9515-49f1-aba2-10550e02e50c-openshift-service-ca-bundle\") pod \"splitter-graph-60f5a-6466846f6-wtc9r\" (UID: \"76f98d5f-9515-49f1-aba2-10550e02e50c\") " pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"
Apr 23 18:33:24.335970 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:24.335934 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76f98d5f-9515-49f1-aba2-10550e02e50c-proxy-tls\") pod \"splitter-graph-60f5a-6466846f6-wtc9r\" (UID: \"76f98d5f-9515-49f1-aba2-10550e02e50c\") " pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"
Apr 23 18:33:24.336149 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:33:24.336026 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/splitter-graph-60f5a-serving-cert: secret "splitter-graph-60f5a-serving-cert" not found
Apr 23 18:33:24.336149 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:33:24.336096 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76f98d5f-9515-49f1-aba2-10550e02e50c-proxy-tls podName:76f98d5f-9515-49f1-aba2-10550e02e50c nodeName:}" failed. No retries permitted until 2026-04-23 18:33:24.836079399 +0000 UTC m=+2112.134325511 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/76f98d5f-9515-49f1-aba2-10550e02e50c-proxy-tls") pod "splitter-graph-60f5a-6466846f6-wtc9r" (UID: "76f98d5f-9515-49f1-aba2-10550e02e50c") : secret "splitter-graph-60f5a-serving-cert" not found
Apr 23 18:33:24.336544 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:24.336523 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76f98d5f-9515-49f1-aba2-10550e02e50c-openshift-service-ca-bundle\") pod \"splitter-graph-60f5a-6466846f6-wtc9r\" (UID: \"76f98d5f-9515-49f1-aba2-10550e02e50c\") " pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"
Apr 23 18:33:24.840610 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:24.840582 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76f98d5f-9515-49f1-aba2-10550e02e50c-proxy-tls\") pod \"splitter-graph-60f5a-6466846f6-wtc9r\" (UID: \"76f98d5f-9515-49f1-aba2-10550e02e50c\") " pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"
Apr 23 18:33:24.842934 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:24.842904 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76f98d5f-9515-49f1-aba2-10550e02e50c-proxy-tls\") pod \"splitter-graph-60f5a-6466846f6-wtc9r\" (UID: \"76f98d5f-9515-49f1-aba2-10550e02e50c\") " pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"
Apr 23 18:33:24.983340 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:24.983287 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"
Apr 23 18:33:25.104128 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:25.104105 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"]
Apr 23 18:33:25.106002 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:33:25.105971 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76f98d5f_9515_49f1_aba2_10550e02e50c.slice/crio-8afd578b008abab40128561f2a762001dce32c073f4a66bb3998749b581dd070 WatchSource:0}: Error finding container 8afd578b008abab40128561f2a762001dce32c073f4a66bb3998749b581dd070: Status 404 returned error can't find the container with id 8afd578b008abab40128561f2a762001dce32c073f4a66bb3998749b581dd070
Apr 23 18:33:25.107785 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:25.107767 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 18:33:25.489691 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:25.489656 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r" event={"ID":"76f98d5f-9515-49f1-aba2-10550e02e50c","Type":"ContainerStarted","Data":"dad0490b87f32ff6b25f21c3ca8f5a8f9d475051116f4736a8d1cb8f2c3e3d3f"}
Apr 23 18:33:25.489848 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:25.489696 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r" event={"ID":"76f98d5f-9515-49f1-aba2-10550e02e50c","Type":"ContainerStarted","Data":"8afd578b008abab40128561f2a762001dce32c073f4a66bb3998749b581dd070"}
Apr 23 18:33:25.489848 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:25.489790 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"
Apr 23 18:33:25.509375 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:25.509313 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r" podStartSLOduration=1.509299885 podStartE2EDuration="1.509299885s" podCreationTimestamp="2026-04-23 18:33:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:33:25.508100305 +0000 UTC m=+2112.806346439" watchObservedRunningTime="2026-04-23 18:33:25.509299885 +0000 UTC m=+2112.807546020"
Apr 23 18:33:29.505655 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:29.505621 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"]
Apr 23 18:33:29.506044 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:29.505839 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr" podUID="336f8215-db45-4321-996c-bfa21c9b9094" containerName="sequence-graph-02052" containerID="cri-o://560ce28f7e76bf009545120daa89e88452ed2cda12c919674027445ccc1de9c6" gracePeriod=30
Apr 23 18:33:31.498289 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:31.498261 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"
Apr 23 18:33:31.999208 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:31.999168 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr" podUID="336f8215-db45-4321-996c-bfa21c9b9094" containerName="sequence-graph-02052" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:33:36.999051 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:36.999010 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr" podUID="336f8215-db45-4321-996c-bfa21c9b9094" containerName="sequence-graph-02052" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:33:38.155644 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:38.155611 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"]
Apr 23 18:33:38.156001 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:38.155892 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r" podUID="76f98d5f-9515-49f1-aba2-10550e02e50c" containerName="splitter-graph-60f5a" containerID="cri-o://dad0490b87f32ff6b25f21c3ca8f5a8f9d475051116f4736a8d1cb8f2c3e3d3f" gracePeriod=30
Apr 23 18:33:41.497686 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:41.497642 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r" podUID="76f98d5f-9515-49f1-aba2-10550e02e50c" containerName="splitter-graph-60f5a" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:33:41.999231 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:41.999188 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr" podUID="336f8215-db45-4321-996c-bfa21c9b9094" containerName="sequence-graph-02052" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:33:41.999437 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:41.999307 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"
Apr 23 18:33:46.497540 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:46.497496 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r" podUID="76f98d5f-9515-49f1-aba2-10550e02e50c" containerName="splitter-graph-60f5a" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:33:46.998667 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:46.998629 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr" podUID="336f8215-db45-4321-996c-bfa21c9b9094" containerName="sequence-graph-02052" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:33:51.497060 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:51.497025 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r" podUID="76f98d5f-9515-49f1-aba2-10550e02e50c" containerName="splitter-graph-60f5a" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:33:51.497547 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:51.497150 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"
Apr 23 18:33:51.999182 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:51.999147 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr" podUID="336f8215-db45-4321-996c-bfa21c9b9094" containerName="sequence-graph-02052" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:33:56.497042 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:56.496956 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r" podUID="76f98d5f-9515-49f1-aba2-10550e02e50c" containerName="splitter-graph-60f5a" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 23 18:33:56.998689 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:56.998652 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr" podUID="336f8215-db45-4321-996c-bfa21c9b9094" containerName="sequence-graph-02052" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 
23 18:33:59.598823 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:59.598792 2572 generic.go:358] "Generic (PLEG): container finished" podID="336f8215-db45-4321-996c-bfa21c9b9094" containerID="560ce28f7e76bf009545120daa89e88452ed2cda12c919674027445ccc1de9c6" exitCode=0 Apr 23 18:33:59.599155 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:59.598870 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr" event={"ID":"336f8215-db45-4321-996c-bfa21c9b9094","Type":"ContainerDied","Data":"560ce28f7e76bf009545120daa89e88452ed2cda12c919674027445ccc1de9c6"} Apr 23 18:33:59.643461 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:59.643437 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr" Apr 23 18:33:59.685478 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:59.685453 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/336f8215-db45-4321-996c-bfa21c9b9094-openshift-service-ca-bundle\") pod \"336f8215-db45-4321-996c-bfa21c9b9094\" (UID: \"336f8215-db45-4321-996c-bfa21c9b9094\") " Apr 23 18:33:59.685610 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:59.685499 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/336f8215-db45-4321-996c-bfa21c9b9094-proxy-tls\") pod \"336f8215-db45-4321-996c-bfa21c9b9094\" (UID: \"336f8215-db45-4321-996c-bfa21c9b9094\") " Apr 23 18:33:59.685788 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:59.685763 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/336f8215-db45-4321-996c-bfa21c9b9094-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "336f8215-db45-4321-996c-bfa21c9b9094" (UID: "336f8215-db45-4321-996c-bfa21c9b9094"). 
InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:33:59.687378 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:59.687360 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/336f8215-db45-4321-996c-bfa21c9b9094-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "336f8215-db45-4321-996c-bfa21c9b9094" (UID: "336f8215-db45-4321-996c-bfa21c9b9094"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:33:59.786180 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:59.786117 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/336f8215-db45-4321-996c-bfa21c9b9094-openshift-service-ca-bundle\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\"" Apr 23 18:33:59.786180 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:33:59.786140 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/336f8215-db45-4321-996c-bfa21c9b9094-proxy-tls\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\"" Apr 23 18:34:00.603016 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:00.602983 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr" event={"ID":"336f8215-db45-4321-996c-bfa21c9b9094","Type":"ContainerDied","Data":"b8d6fc7188cfe037b6682b7f24c8f609634c238e6c40a8b032a63c0fcbe0a2ec"} Apr 23 18:34:00.603431 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:00.603018 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr" Apr 23 18:34:00.603431 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:00.603025 2572 scope.go:117] "RemoveContainer" containerID="560ce28f7e76bf009545120daa89e88452ed2cda12c919674027445ccc1de9c6" Apr 23 18:34:00.624845 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:00.624820 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"] Apr 23 18:34:00.626844 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:00.626823 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-02052-76989ff5b8-vh6wr"] Apr 23 18:34:01.307890 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:01.307853 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="336f8215-db45-4321-996c-bfa21c9b9094" path="/var/lib/kubelet/pods/336f8215-db45-4321-996c-bfa21c9b9094/volumes" Apr 23 18:34:01.497750 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:01.497709 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r" podUID="76f98d5f-9515-49f1-aba2-10550e02e50c" containerName="splitter-graph-60f5a" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:34:06.497434 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:06.497394 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r" podUID="76f98d5f-9515-49f1-aba2-10550e02e50c" containerName="splitter-graph-60f5a" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:34:08.304334 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:08.304300 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r" Apr 23 18:34:08.346142 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:08.346113 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76f98d5f-9515-49f1-aba2-10550e02e50c-proxy-tls\") pod \"76f98d5f-9515-49f1-aba2-10550e02e50c\" (UID: \"76f98d5f-9515-49f1-aba2-10550e02e50c\") " Apr 23 18:34:08.346292 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:08.346151 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76f98d5f-9515-49f1-aba2-10550e02e50c-openshift-service-ca-bundle\") pod \"76f98d5f-9515-49f1-aba2-10550e02e50c\" (UID: \"76f98d5f-9515-49f1-aba2-10550e02e50c\") " Apr 23 18:34:08.346543 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:08.346517 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76f98d5f-9515-49f1-aba2-10550e02e50c-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "76f98d5f-9515-49f1-aba2-10550e02e50c" (UID: "76f98d5f-9515-49f1-aba2-10550e02e50c"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:34:08.348097 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:08.348076 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76f98d5f-9515-49f1-aba2-10550e02e50c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "76f98d5f-9515-49f1-aba2-10550e02e50c" (UID: "76f98d5f-9515-49f1-aba2-10550e02e50c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:34:08.447247 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:08.447173 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76f98d5f-9515-49f1-aba2-10550e02e50c-proxy-tls\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\"" Apr 23 18:34:08.447247 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:08.447207 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76f98d5f-9515-49f1-aba2-10550e02e50c-openshift-service-ca-bundle\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\"" Apr 23 18:34:08.633804 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:08.633767 2572 generic.go:358] "Generic (PLEG): container finished" podID="76f98d5f-9515-49f1-aba2-10550e02e50c" containerID="dad0490b87f32ff6b25f21c3ca8f5a8f9d475051116f4736a8d1cb8f2c3e3d3f" exitCode=0 Apr 23 18:34:08.633955 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:08.633827 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r" Apr 23 18:34:08.633955 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:08.633830 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r" event={"ID":"76f98d5f-9515-49f1-aba2-10550e02e50c","Type":"ContainerDied","Data":"dad0490b87f32ff6b25f21c3ca8f5a8f9d475051116f4736a8d1cb8f2c3e3d3f"} Apr 23 18:34:08.633955 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:08.633862 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r" event={"ID":"76f98d5f-9515-49f1-aba2-10550e02e50c","Type":"ContainerDied","Data":"8afd578b008abab40128561f2a762001dce32c073f4a66bb3998749b581dd070"} Apr 23 18:34:08.633955 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:08.633879 2572 scope.go:117] "RemoveContainer" containerID="dad0490b87f32ff6b25f21c3ca8f5a8f9d475051116f4736a8d1cb8f2c3e3d3f" Apr 23 18:34:08.645963 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:08.645943 2572 scope.go:117] "RemoveContainer" containerID="dad0490b87f32ff6b25f21c3ca8f5a8f9d475051116f4736a8d1cb8f2c3e3d3f" Apr 23 18:34:08.646230 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:34:08.646210 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dad0490b87f32ff6b25f21c3ca8f5a8f9d475051116f4736a8d1cb8f2c3e3d3f\": container with ID starting with dad0490b87f32ff6b25f21c3ca8f5a8f9d475051116f4736a8d1cb8f2c3e3d3f not found: ID does not exist" containerID="dad0490b87f32ff6b25f21c3ca8f5a8f9d475051116f4736a8d1cb8f2c3e3d3f" Apr 23 18:34:08.646294 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:08.646239 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dad0490b87f32ff6b25f21c3ca8f5a8f9d475051116f4736a8d1cb8f2c3e3d3f"} err="failed to get container status 
\"dad0490b87f32ff6b25f21c3ca8f5a8f9d475051116f4736a8d1cb8f2c3e3d3f\": rpc error: code = NotFound desc = could not find container \"dad0490b87f32ff6b25f21c3ca8f5a8f9d475051116f4736a8d1cb8f2c3e3d3f\": container with ID starting with dad0490b87f32ff6b25f21c3ca8f5a8f9d475051116f4736a8d1cb8f2c3e3d3f not found: ID does not exist" Apr 23 18:34:08.657632 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:08.657609 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"] Apr 23 18:34:08.661071 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:08.661051 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-60f5a-6466846f6-wtc9r"] Apr 23 18:34:09.308020 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:09.307986 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76f98d5f-9515-49f1-aba2-10550e02e50c" path="/var/lib/kubelet/pods/76f98d5f-9515-49f1-aba2-10550e02e50c/volumes" Apr 23 18:34:38.388411 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.388377 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr"] Apr 23 18:34:38.388835 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.388641 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="76f98d5f-9515-49f1-aba2-10550e02e50c" containerName="splitter-graph-60f5a" Apr 23 18:34:38.388835 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.388652 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f98d5f-9515-49f1-aba2-10550e02e50c" containerName="splitter-graph-60f5a" Apr 23 18:34:38.388835 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.388675 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="336f8215-db45-4321-996c-bfa21c9b9094" containerName="sequence-graph-02052" Apr 23 18:34:38.388835 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.388680 2572 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="336f8215-db45-4321-996c-bfa21c9b9094" containerName="sequence-graph-02052" Apr 23 18:34:38.388835 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.388721 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="76f98d5f-9515-49f1-aba2-10550e02e50c" containerName="splitter-graph-60f5a" Apr 23 18:34:38.388835 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.388728 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="336f8215-db45-4321-996c-bfa21c9b9094" containerName="sequence-graph-02052" Apr 23 18:34:38.391455 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.391436 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" Apr 23 18:34:38.394082 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.394061 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 23 18:34:38.394187 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.394061 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-760ed-kube-rbac-proxy-sar-config\"" Apr 23 18:34:38.394187 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.394093 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-b65vr\"" Apr 23 18:34:38.394187 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.394093 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-760ed-serving-cert\"" Apr 23 18:34:38.398725 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.398625 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr"] Apr 23 18:34:38.459015 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.458983 2572 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09342f26-fdec-4e8b-9720-f5252a85e1a3-proxy-tls\") pod \"splitter-graph-760ed-85f67b648f-2djgr\" (UID: \"09342f26-fdec-4e8b-9720-f5252a85e1a3\") " pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" Apr 23 18:34:38.459148 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.459025 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09342f26-fdec-4e8b-9720-f5252a85e1a3-openshift-service-ca-bundle\") pod \"splitter-graph-760ed-85f67b648f-2djgr\" (UID: \"09342f26-fdec-4e8b-9720-f5252a85e1a3\") " pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" Apr 23 18:34:38.559810 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.559777 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09342f26-fdec-4e8b-9720-f5252a85e1a3-proxy-tls\") pod \"splitter-graph-760ed-85f67b648f-2djgr\" (UID: \"09342f26-fdec-4e8b-9720-f5252a85e1a3\") " pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" Apr 23 18:34:38.559810 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.559815 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09342f26-fdec-4e8b-9720-f5252a85e1a3-openshift-service-ca-bundle\") pod \"splitter-graph-760ed-85f67b648f-2djgr\" (UID: \"09342f26-fdec-4e8b-9720-f5252a85e1a3\") " pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" Apr 23 18:34:38.559997 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:34:38.559938 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/splitter-graph-760ed-serving-cert: secret "splitter-graph-760ed-serving-cert" not found Apr 23 18:34:38.560033 ip-10-0-130-162 kubenswrapper[2572]: E0423 
18:34:38.560018 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09342f26-fdec-4e8b-9720-f5252a85e1a3-proxy-tls podName:09342f26-fdec-4e8b-9720-f5252a85e1a3 nodeName:}" failed. No retries permitted until 2026-04-23 18:34:39.06000022 +0000 UTC m=+2186.358246338 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/09342f26-fdec-4e8b-9720-f5252a85e1a3-proxy-tls") pod "splitter-graph-760ed-85f67b648f-2djgr" (UID: "09342f26-fdec-4e8b-9720-f5252a85e1a3") : secret "splitter-graph-760ed-serving-cert" not found Apr 23 18:34:38.560390 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:38.560374 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09342f26-fdec-4e8b-9720-f5252a85e1a3-openshift-service-ca-bundle\") pod \"splitter-graph-760ed-85f67b648f-2djgr\" (UID: \"09342f26-fdec-4e8b-9720-f5252a85e1a3\") " pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" Apr 23 18:34:39.064338 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.064282 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09342f26-fdec-4e8b-9720-f5252a85e1a3-proxy-tls\") pod \"splitter-graph-760ed-85f67b648f-2djgr\" (UID: \"09342f26-fdec-4e8b-9720-f5252a85e1a3\") " pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" Apr 23 18:34:39.066744 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.066710 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09342f26-fdec-4e8b-9720-f5252a85e1a3-proxy-tls\") pod \"splitter-graph-760ed-85f67b648f-2djgr\" (UID: \"09342f26-fdec-4e8b-9720-f5252a85e1a3\") " pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" Apr 23 18:34:39.306791 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.306750 2572 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" Apr 23 18:34:39.427212 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.427069 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr"] Apr 23 18:34:39.429377 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:34:39.429342 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09342f26_fdec_4e8b_9720_f5252a85e1a3.slice/crio-c0d8887b48c6ccdcc7d0b8e57c77c8181f9ffda3bf1b2c4534c61d788c163da8 WatchSource:0}: Error finding container c0d8887b48c6ccdcc7d0b8e57c77c8181f9ffda3bf1b2c4534c61d788c163da8: Status 404 returned error can't find the container with id c0d8887b48c6ccdcc7d0b8e57c77c8181f9ffda3bf1b2c4534c61d788c163da8 Apr 23 18:34:39.709764 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.709693 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7"] Apr 23 18:34:39.713225 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.713200 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" Apr 23 18:34:39.715815 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.715790 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-febb8-serving-cert\"" Apr 23 18:34:39.715947 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.715826 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-febb8-kube-rbac-proxy-sar-config\"" Apr 23 18:34:39.718969 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.718943 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7"] Apr 23 18:34:39.727227 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.727205 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" event={"ID":"09342f26-fdec-4e8b-9720-f5252a85e1a3","Type":"ContainerStarted","Data":"89918f25011b2a69be6b791e14cff3aaa32212537b082bd5e315c2f124c237c0"} Apr 23 18:34:39.727311 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.727235 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" event={"ID":"09342f26-fdec-4e8b-9720-f5252a85e1a3","Type":"ContainerStarted","Data":"c0d8887b48c6ccdcc7d0b8e57c77c8181f9ffda3bf1b2c4534c61d788c163da8"} Apr 23 18:34:39.727311 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.727297 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" Apr 23 18:34:39.746208 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.746169 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" podStartSLOduration=1.746158466 podStartE2EDuration="1.746158466s" podCreationTimestamp="2026-04-23 18:34:38 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:34:39.745660507 +0000 UTC m=+2187.043906666" watchObservedRunningTime="2026-04-23 18:34:39.746158466 +0000 UTC m=+2187.044404599" Apr 23 18:34:39.768812 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.768788 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e49426a4-15c8-4789-8200-264dc7c9c076-proxy-tls\") pod \"switch-graph-febb8-7bb468f96c-5bnl7\" (UID: \"e49426a4-15c8-4789-8200-264dc7c9c076\") " pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" Apr 23 18:34:39.768916 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.768831 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e49426a4-15c8-4789-8200-264dc7c9c076-openshift-service-ca-bundle\") pod \"switch-graph-febb8-7bb468f96c-5bnl7\" (UID: \"e49426a4-15c8-4789-8200-264dc7c9c076\") " pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" Apr 23 18:34:39.869585 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.869553 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e49426a4-15c8-4789-8200-264dc7c9c076-proxy-tls\") pod \"switch-graph-febb8-7bb468f96c-5bnl7\" (UID: \"e49426a4-15c8-4789-8200-264dc7c9c076\") " pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" Apr 23 18:34:39.869740 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.869598 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e49426a4-15c8-4789-8200-264dc7c9c076-openshift-service-ca-bundle\") pod \"switch-graph-febb8-7bb468f96c-5bnl7\" (UID: 
\"e49426a4-15c8-4789-8200-264dc7c9c076\") " pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" Apr 23 18:34:39.869740 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:34:39.869714 2572 secret.go:189] Couldn't get secret kserve-ci-e2e-test/switch-graph-febb8-serving-cert: secret "switch-graph-febb8-serving-cert" not found Apr 23 18:34:39.869862 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:34:39.869806 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e49426a4-15c8-4789-8200-264dc7c9c076-proxy-tls podName:e49426a4-15c8-4789-8200-264dc7c9c076 nodeName:}" failed. No retries permitted until 2026-04-23 18:34:40.369783123 +0000 UTC m=+2187.668029248 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e49426a4-15c8-4789-8200-264dc7c9c076-proxy-tls") pod "switch-graph-febb8-7bb468f96c-5bnl7" (UID: "e49426a4-15c8-4789-8200-264dc7c9c076") : secret "switch-graph-febb8-serving-cert" not found Apr 23 18:34:39.870139 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:39.870120 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e49426a4-15c8-4789-8200-264dc7c9c076-openshift-service-ca-bundle\") pod \"switch-graph-febb8-7bb468f96c-5bnl7\" (UID: \"e49426a4-15c8-4789-8200-264dc7c9c076\") " pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" Apr 23 18:34:40.373513 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:40.373460 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e49426a4-15c8-4789-8200-264dc7c9c076-proxy-tls\") pod \"switch-graph-febb8-7bb468f96c-5bnl7\" (UID: \"e49426a4-15c8-4789-8200-264dc7c9c076\") " pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" Apr 23 18:34:40.375874 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:40.375854 2572 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e49426a4-15c8-4789-8200-264dc7c9c076-proxy-tls\") pod \"switch-graph-febb8-7bb468f96c-5bnl7\" (UID: \"e49426a4-15c8-4789-8200-264dc7c9c076\") " pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" Apr 23 18:34:40.624501 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:40.624409 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" Apr 23 18:34:40.739461 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:40.739438 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7"] Apr 23 18:34:40.741697 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:34:40.741669 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode49426a4_15c8_4789_8200_264dc7c9c076.slice/crio-b48c1e690f6819f4e4ec81cfeb1b9a367553e372116ef7e0dac11db0828ec132 WatchSource:0}: Error finding container b48c1e690f6819f4e4ec81cfeb1b9a367553e372116ef7e0dac11db0828ec132: Status 404 returned error can't find the container with id b48c1e690f6819f4e4ec81cfeb1b9a367553e372116ef7e0dac11db0828ec132 Apr 23 18:34:41.733767 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:41.733732 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" event={"ID":"e49426a4-15c8-4789-8200-264dc7c9c076","Type":"ContainerStarted","Data":"dafaf78d78f1e1dcf095c564c69a7faf93f47f4d71fc7a37138f3796592fd2e0"} Apr 23 18:34:41.733767 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:41.733769 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" event={"ID":"e49426a4-15c8-4789-8200-264dc7c9c076","Type":"ContainerStarted","Data":"b48c1e690f6819f4e4ec81cfeb1b9a367553e372116ef7e0dac11db0828ec132"} Apr 23 18:34:41.734177 
ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:41.733841 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" Apr 23 18:34:41.751854 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:41.751807 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" podStartSLOduration=2.751790469 podStartE2EDuration="2.751790469s" podCreationTimestamp="2026-04-23 18:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:34:41.750842185 +0000 UTC m=+2189.049088325" watchObservedRunningTime="2026-04-23 18:34:41.751790469 +0000 UTC m=+2189.050036604" Apr 23 18:34:45.735503 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:45.735471 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" Apr 23 18:34:47.743475 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:34:47.743437 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" Apr 23 18:42:53.107868 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:42:53.107837 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr"] Apr 23 18:42:53.110220 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:42:53.108061 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" podUID="09342f26-fdec-4e8b-9720-f5252a85e1a3" containerName="splitter-graph-760ed" containerID="cri-o://89918f25011b2a69be6b791e14cff3aaa32212537b082bd5e315c2f124c237c0" gracePeriod=30 Apr 23 18:42:55.734670 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:42:55.734627 2572 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" podUID="09342f26-fdec-4e8b-9720-f5252a85e1a3" containerName="splitter-graph-760ed" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:43:00.733954 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:00.733911 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" podUID="09342f26-fdec-4e8b-9720-f5252a85e1a3" containerName="splitter-graph-760ed" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:43:05.734616 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:05.734574 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" podUID="09342f26-fdec-4e8b-9720-f5252a85e1a3" containerName="splitter-graph-760ed" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:43:05.735026 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:05.734679 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" Apr 23 18:43:10.734610 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:10.734563 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" podUID="09342f26-fdec-4e8b-9720-f5252a85e1a3" containerName="splitter-graph-760ed" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:43:15.734516 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:15.734465 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" podUID="09342f26-fdec-4e8b-9720-f5252a85e1a3" containerName="splitter-graph-760ed" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:43:20.734170 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:20.734123 2572 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" podUID="09342f26-fdec-4e8b-9720-f5252a85e1a3" containerName="splitter-graph-760ed" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:43:23.276060 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:23.276037 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" Apr 23 18:43:23.312049 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:23.312023 2572 generic.go:358] "Generic (PLEG): container finished" podID="09342f26-fdec-4e8b-9720-f5252a85e1a3" containerID="89918f25011b2a69be6b791e14cff3aaa32212537b082bd5e315c2f124c237c0" exitCode=0 Apr 23 18:43:23.312177 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:23.312076 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" Apr 23 18:43:23.312177 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:23.312098 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" event={"ID":"09342f26-fdec-4e8b-9720-f5252a85e1a3","Type":"ContainerDied","Data":"89918f25011b2a69be6b791e14cff3aaa32212537b082bd5e315c2f124c237c0"} Apr 23 18:43:23.312177 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:23.312136 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr" event={"ID":"09342f26-fdec-4e8b-9720-f5252a85e1a3","Type":"ContainerDied","Data":"c0d8887b48c6ccdcc7d0b8e57c77c8181f9ffda3bf1b2c4534c61d788c163da8"} Apr 23 18:43:23.312177 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:23.312156 2572 scope.go:117] "RemoveContainer" containerID="89918f25011b2a69be6b791e14cff3aaa32212537b082bd5e315c2f124c237c0" Apr 23 18:43:23.319101 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:23.319086 2572 scope.go:117] "RemoveContainer" 
containerID="89918f25011b2a69be6b791e14cff3aaa32212537b082bd5e315c2f124c237c0" Apr 23 18:43:23.319358 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:43:23.319338 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89918f25011b2a69be6b791e14cff3aaa32212537b082bd5e315c2f124c237c0\": container with ID starting with 89918f25011b2a69be6b791e14cff3aaa32212537b082bd5e315c2f124c237c0 not found: ID does not exist" containerID="89918f25011b2a69be6b791e14cff3aaa32212537b082bd5e315c2f124c237c0" Apr 23 18:43:23.319417 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:23.319366 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89918f25011b2a69be6b791e14cff3aaa32212537b082bd5e315c2f124c237c0"} err="failed to get container status \"89918f25011b2a69be6b791e14cff3aaa32212537b082bd5e315c2f124c237c0\": rpc error: code = NotFound desc = could not find container \"89918f25011b2a69be6b791e14cff3aaa32212537b082bd5e315c2f124c237c0\": container with ID starting with 89918f25011b2a69be6b791e14cff3aaa32212537b082bd5e315c2f124c237c0 not found: ID does not exist" Apr 23 18:43:23.375722 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:23.375661 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09342f26-fdec-4e8b-9720-f5252a85e1a3-proxy-tls\") pod \"09342f26-fdec-4e8b-9720-f5252a85e1a3\" (UID: \"09342f26-fdec-4e8b-9720-f5252a85e1a3\") " Apr 23 18:43:23.375824 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:23.375730 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09342f26-fdec-4e8b-9720-f5252a85e1a3-openshift-service-ca-bundle\") pod \"09342f26-fdec-4e8b-9720-f5252a85e1a3\" (UID: \"09342f26-fdec-4e8b-9720-f5252a85e1a3\") " Apr 23 18:43:23.376093 ip-10-0-130-162 kubenswrapper[2572]: I0423 
18:43:23.376070 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09342f26-fdec-4e8b-9720-f5252a85e1a3-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "09342f26-fdec-4e8b-9720-f5252a85e1a3" (UID: "09342f26-fdec-4e8b-9720-f5252a85e1a3"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:43:23.377768 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:23.377750 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09342f26-fdec-4e8b-9720-f5252a85e1a3-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "09342f26-fdec-4e8b-9720-f5252a85e1a3" (UID: "09342f26-fdec-4e8b-9720-f5252a85e1a3"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:43:23.477257 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:23.477222 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09342f26-fdec-4e8b-9720-f5252a85e1a3-proxy-tls\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\"" Apr 23 18:43:23.477257 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:23.477255 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09342f26-fdec-4e8b-9720-f5252a85e1a3-openshift-service-ca-bundle\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\"" Apr 23 18:43:23.632868 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:23.632802 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr"] Apr 23 18:43:23.636749 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:23.636724 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-760ed-85f67b648f-2djgr"] Apr 23 18:43:25.307867 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:43:25.307834 2572 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09342f26-fdec-4e8b-9720-f5252a85e1a3" path="/var/lib/kubelet/pods/09342f26-fdec-4e8b-9720-f5252a85e1a3/volumes" Apr 23 18:50:59.053426 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:50:59.053384 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7"] Apr 23 18:50:59.055810 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:50:59.053687 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" podUID="e49426a4-15c8-4789-8200-264dc7c9c076" containerName="switch-graph-febb8" containerID="cri-o://dafaf78d78f1e1dcf095c564c69a7faf93f47f4d71fc7a37138f3796592fd2e0" gracePeriod=30 Apr 23 18:51:00.471349 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.471303 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kl8lw/must-gather-nd9hx"] Apr 23 18:51:00.471709 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.471594 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="09342f26-fdec-4e8b-9720-f5252a85e1a3" containerName="splitter-graph-760ed" Apr 23 18:51:00.471709 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.471604 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="09342f26-fdec-4e8b-9720-f5252a85e1a3" containerName="splitter-graph-760ed" Apr 23 18:51:00.471709 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.471650 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="09342f26-fdec-4e8b-9720-f5252a85e1a3" containerName="splitter-graph-760ed" Apr 23 18:51:00.474438 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.474421 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kl8lw/must-gather-nd9hx" Apr 23 18:51:00.477087 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.477063 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-kl8lw\"/\"kube-root-ca.crt\"" Apr 23 18:51:00.477194 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.477172 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-kl8lw\"/\"default-dockercfg-wvx85\"" Apr 23 18:51:00.478255 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.478234 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-kl8lw\"/\"openshift-service-ca.crt\"" Apr 23 18:51:00.490369 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.490314 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kl8lw/must-gather-nd9hx"] Apr 23 18:51:00.565790 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.565760 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1f942549-5168-4c2b-a167-28a80a862d03-must-gather-output\") pod \"must-gather-nd9hx\" (UID: \"1f942549-5168-4c2b-a167-28a80a862d03\") " pod="openshift-must-gather-kl8lw/must-gather-nd9hx" Apr 23 18:51:00.565790 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.565791 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td24f\" (UniqueName: \"kubernetes.io/projected/1f942549-5168-4c2b-a167-28a80a862d03-kube-api-access-td24f\") pod \"must-gather-nd9hx\" (UID: \"1f942549-5168-4c2b-a167-28a80a862d03\") " pod="openshift-must-gather-kl8lw/must-gather-nd9hx" Apr 23 18:51:00.666961 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.666924 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/1f942549-5168-4c2b-a167-28a80a862d03-must-gather-output\") pod \"must-gather-nd9hx\" (UID: \"1f942549-5168-4c2b-a167-28a80a862d03\") " pod="openshift-must-gather-kl8lw/must-gather-nd9hx" Apr 23 18:51:00.666961 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.666961 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-td24f\" (UniqueName: \"kubernetes.io/projected/1f942549-5168-4c2b-a167-28a80a862d03-kube-api-access-td24f\") pod \"must-gather-nd9hx\" (UID: \"1f942549-5168-4c2b-a167-28a80a862d03\") " pod="openshift-must-gather-kl8lw/must-gather-nd9hx" Apr 23 18:51:00.667342 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.667306 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1f942549-5168-4c2b-a167-28a80a862d03-must-gather-output\") pod \"must-gather-nd9hx\" (UID: \"1f942549-5168-4c2b-a167-28a80a862d03\") " pod="openshift-must-gather-kl8lw/must-gather-nd9hx" Apr 23 18:51:00.676424 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.676406 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-td24f\" (UniqueName: \"kubernetes.io/projected/1f942549-5168-4c2b-a167-28a80a862d03-kube-api-access-td24f\") pod \"must-gather-nd9hx\" (UID: \"1f942549-5168-4c2b-a167-28a80a862d03\") " pod="openshift-must-gather-kl8lw/must-gather-nd9hx" Apr 23 18:51:00.791396 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.791366 2572 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kl8lw/must-gather-nd9hx" Apr 23 18:51:00.904111 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.904087 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kl8lw/must-gather-nd9hx"] Apr 23 18:51:00.906636 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:51:00.906609 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f942549_5168_4c2b_a167_28a80a862d03.slice/crio-a62c4536f9d1197f85028db7d9027fcced8cae7e36ebedecad99f1a61369a5de WatchSource:0}: Error finding container a62c4536f9d1197f85028db7d9027fcced8cae7e36ebedecad99f1a61369a5de: Status 404 returned error can't find the container with id a62c4536f9d1197f85028db7d9027fcced8cae7e36ebedecad99f1a61369a5de Apr 23 18:51:00.908259 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:00.908245 2572 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 18:51:01.573162 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:01.573112 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kl8lw/must-gather-nd9hx" event={"ID":"1f942549-5168-4c2b-a167-28a80a862d03","Type":"ContainerStarted","Data":"a62c4536f9d1197f85028db7d9027fcced8cae7e36ebedecad99f1a61369a5de"} Apr 23 18:51:02.742524 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:02.742470 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" podUID="e49426a4-15c8-4789-8200-264dc7c9c076" containerName="switch-graph-febb8" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:51:05.588892 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:05.588848 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kl8lw/must-gather-nd9hx" 
event={"ID":"1f942549-5168-4c2b-a167-28a80a862d03","Type":"ContainerStarted","Data":"c59f73496abefb0fdc7dde2ffe2509be6cb4f87d5df3c4c797c09743eaa59e00"} Apr 23 18:51:05.589306 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:05.588900 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kl8lw/must-gather-nd9hx" event={"ID":"1f942549-5168-4c2b-a167-28a80a862d03","Type":"ContainerStarted","Data":"bb0378b14c284dd48c4523303853466df8f6d4d0c5d61f08e54e483c8bdb0261"} Apr 23 18:51:05.606057 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:05.606006 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kl8lw/must-gather-nd9hx" podStartSLOduration=1.57513615 podStartE2EDuration="5.605990682s" podCreationTimestamp="2026-04-23 18:51:00 +0000 UTC" firstStartedPulling="2026-04-23 18:51:00.908388675 +0000 UTC m=+3168.206634785" lastFinishedPulling="2026-04-23 18:51:04.939243192 +0000 UTC m=+3172.237489317" observedRunningTime="2026-04-23 18:51:05.605042697 +0000 UTC m=+3172.903288830" watchObservedRunningTime="2026-04-23 18:51:05.605990682 +0000 UTC m=+3172.904236816" Apr 23 18:51:07.741997 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:07.741959 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" podUID="e49426a4-15c8-4789-8200-264dc7c9c076" containerName="switch-graph-febb8" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:51:12.741414 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:12.741369 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" podUID="e49426a4-15c8-4789-8200-264dc7c9c076" containerName="switch-graph-febb8" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:51:12.741863 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:12.741489 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not 
ready" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" Apr 23 18:51:13.490104 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:13.490070 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-febb8-7bb468f96c-5bnl7_e49426a4-15c8-4789-8200-264dc7c9c076/switch-graph-febb8/0.log" Apr 23 18:51:14.285095 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:14.285064 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-febb8-7bb468f96c-5bnl7_e49426a4-15c8-4789-8200-264dc7c9c076/switch-graph-febb8/0.log" Apr 23 18:51:15.086039 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:15.086008 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-febb8-7bb468f96c-5bnl7_e49426a4-15c8-4789-8200-264dc7c9c076/switch-graph-febb8/0.log" Apr 23 18:51:15.857777 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:15.857748 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-febb8-7bb468f96c-5bnl7_e49426a4-15c8-4789-8200-264dc7c9c076/switch-graph-febb8/0.log" Apr 23 18:51:16.721031 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:16.721000 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-febb8-7bb468f96c-5bnl7_e49426a4-15c8-4789-8200-264dc7c9c076/switch-graph-febb8/0.log" Apr 23 18:51:17.581704 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:17.581663 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-febb8-7bb468f96c-5bnl7_e49426a4-15c8-4789-8200-264dc7c9c076/switch-graph-febb8/0.log" Apr 23 18:51:17.740568 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:17.740534 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" podUID="e49426a4-15c8-4789-8200-264dc7c9c076" containerName="switch-graph-febb8" probeResult="failure" output="HTTP probe failed with statuscode: 
503" Apr 23 18:51:18.423767 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:18.423735 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-febb8-7bb468f96c-5bnl7_e49426a4-15c8-4789-8200-264dc7c9c076/switch-graph-febb8/0.log" Apr 23 18:51:19.268067 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:19.268035 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-febb8-7bb468f96c-5bnl7_e49426a4-15c8-4789-8200-264dc7c9c076/switch-graph-febb8/0.log" Apr 23 18:51:20.080501 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:20.080463 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-febb8-7bb468f96c-5bnl7_e49426a4-15c8-4789-8200-264dc7c9c076/switch-graph-febb8/0.log" Apr 23 18:51:20.875963 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:20.875935 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-febb8-7bb468f96c-5bnl7_e49426a4-15c8-4789-8200-264dc7c9c076/switch-graph-febb8/0.log" Apr 23 18:51:21.669180 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:21.669131 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-febb8-7bb468f96c-5bnl7_e49426a4-15c8-4789-8200-264dc7c9c076/switch-graph-febb8/0.log" Apr 23 18:51:22.509594 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:22.509557 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-febb8-7bb468f96c-5bnl7_e49426a4-15c8-4789-8200-264dc7c9c076/switch-graph-febb8/0.log" Apr 23 18:51:22.741107 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:22.741054 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" podUID="e49426a4-15c8-4789-8200-264dc7c9c076" containerName="switch-graph-febb8" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:51:23.651571 ip-10-0-130-162 kubenswrapper[2572]: 
I0423 18:51:23.651486 2572 generic.go:358] "Generic (PLEG): container finished" podID="1f942549-5168-4c2b-a167-28a80a862d03" containerID="bb0378b14c284dd48c4523303853466df8f6d4d0c5d61f08e54e483c8bdb0261" exitCode=0 Apr 23 18:51:23.651571 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:23.651527 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kl8lw/must-gather-nd9hx" event={"ID":"1f942549-5168-4c2b-a167-28a80a862d03","Type":"ContainerDied","Data":"bb0378b14c284dd48c4523303853466df8f6d4d0c5d61f08e54e483c8bdb0261"} Apr 23 18:51:23.651979 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:23.651850 2572 scope.go:117] "RemoveContainer" containerID="bb0378b14c284dd48c4523303853466df8f6d4d0c5d61f08e54e483c8bdb0261" Apr 23 18:51:24.329400 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:24.329371 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kl8lw_must-gather-nd9hx_1f942549-5168-4c2b-a167-28a80a862d03/gather/0.log" Apr 23 18:51:27.558906 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:27.558873 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-cc6xb_33d63faf-1112-4576-8025-7dc6736d218a/global-pull-secret-syncer/0.log" Apr 23 18:51:27.740980 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:27.740941 2572 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" podUID="e49426a4-15c8-4789-8200-264dc7c9c076" containerName="switch-graph-febb8" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:51:27.763209 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:27.763180 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-4vkdl_037e05f6-1827-4968-abeb-530665aa07ab/konnectivity-agent/0.log" Apr 23 18:51:27.823731 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:27.823647 2572 log.go:25] "Finished parsing log file" 
path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-130-162.ec2.internal_55d1c989d003a1c5d6c5adfec051c073/haproxy/0.log" Apr 23 18:51:29.194434 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.194411 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" Apr 23 18:51:29.303094 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.303048 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e49426a4-15c8-4789-8200-264dc7c9c076-openshift-service-ca-bundle\") pod \"e49426a4-15c8-4789-8200-264dc7c9c076\" (UID: \"e49426a4-15c8-4789-8200-264dc7c9c076\") " Apr 23 18:51:29.303266 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.303166 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e49426a4-15c8-4789-8200-264dc7c9c076-proxy-tls\") pod \"e49426a4-15c8-4789-8200-264dc7c9c076\" (UID: \"e49426a4-15c8-4789-8200-264dc7c9c076\") " Apr 23 18:51:29.303478 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.303445 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49426a4-15c8-4789-8200-264dc7c9c076-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "e49426a4-15c8-4789-8200-264dc7c9c076" (UID: "e49426a4-15c8-4789-8200-264dc7c9c076"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:51:29.305122 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.305101 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e49426a4-15c8-4789-8200-264dc7c9c076-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e49426a4-15c8-4789-8200-264dc7c9c076" (UID: "e49426a4-15c8-4789-8200-264dc7c9c076"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:51:29.404021 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.403988 2572 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e49426a4-15c8-4789-8200-264dc7c9c076-openshift-service-ca-bundle\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\"" Apr 23 18:51:29.404021 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.404019 2572 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e49426a4-15c8-4789-8200-264dc7c9c076-proxy-tls\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\"" Apr 23 18:51:29.672173 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.672069 2572 generic.go:358] "Generic (PLEG): container finished" podID="e49426a4-15c8-4789-8200-264dc7c9c076" containerID="dafaf78d78f1e1dcf095c564c69a7faf93f47f4d71fc7a37138f3796592fd2e0" exitCode=0 Apr 23 18:51:29.672173 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.672116 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" event={"ID":"e49426a4-15c8-4789-8200-264dc7c9c076","Type":"ContainerDied","Data":"dafaf78d78f1e1dcf095c564c69a7faf93f47f4d71fc7a37138f3796592fd2e0"} Apr 23 18:51:29.672173 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.672129 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" Apr 23 18:51:29.672173 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.672142 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7" event={"ID":"e49426a4-15c8-4789-8200-264dc7c9c076","Type":"ContainerDied","Data":"b48c1e690f6819f4e4ec81cfeb1b9a367553e372116ef7e0dac11db0828ec132"} Apr 23 18:51:29.672492 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.672162 2572 scope.go:117] "RemoveContainer" containerID="dafaf78d78f1e1dcf095c564c69a7faf93f47f4d71fc7a37138f3796592fd2e0" Apr 23 18:51:29.679564 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.679545 2572 scope.go:117] "RemoveContainer" containerID="dafaf78d78f1e1dcf095c564c69a7faf93f47f4d71fc7a37138f3796592fd2e0" Apr 23 18:51:29.679853 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:51:29.679835 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dafaf78d78f1e1dcf095c564c69a7faf93f47f4d71fc7a37138f3796592fd2e0\": container with ID starting with dafaf78d78f1e1dcf095c564c69a7faf93f47f4d71fc7a37138f3796592fd2e0 not found: ID does not exist" containerID="dafaf78d78f1e1dcf095c564c69a7faf93f47f4d71fc7a37138f3796592fd2e0" Apr 23 18:51:29.679916 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.679865 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dafaf78d78f1e1dcf095c564c69a7faf93f47f4d71fc7a37138f3796592fd2e0"} err="failed to get container status \"dafaf78d78f1e1dcf095c564c69a7faf93f47f4d71fc7a37138f3796592fd2e0\": rpc error: code = NotFound desc = could not find container \"dafaf78d78f1e1dcf095c564c69a7faf93f47f4d71fc7a37138f3796592fd2e0\": container with ID starting with dafaf78d78f1e1dcf095c564c69a7faf93f47f4d71fc7a37138f3796592fd2e0 not found: ID does not exist" Apr 23 18:51:29.689537 ip-10-0-130-162 kubenswrapper[2572]: I0423 
18:51:29.689517 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7"] Apr 23 18:51:29.692928 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.692906 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-febb8-7bb468f96c-5bnl7"] Apr 23 18:51:29.783832 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.783798 2572 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kl8lw/must-gather-nd9hx"] Apr 23 18:51:29.784022 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.784001 2572 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-must-gather-kl8lw/must-gather-nd9hx" podUID="1f942549-5168-4c2b-a167-28a80a862d03" containerName="copy" containerID="cri-o://c59f73496abefb0fdc7dde2ffe2509be6cb4f87d5df3c4c797c09743eaa59e00" gracePeriod=2 Apr 23 18:51:29.790072 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:29.790048 2572 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kl8lw/must-gather-nd9hx"] Apr 23 18:51:30.003034 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.003013 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kl8lw_must-gather-nd9hx_1f942549-5168-4c2b-a167-28a80a862d03/copy/0.log" Apr 23 18:51:30.003337 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.003301 2572 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kl8lw/must-gather-nd9hx" Apr 23 18:51:30.005706 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.005678 2572 status_manager.go:895] "Failed to get status for pod" podUID="1f942549-5168-4c2b-a167-28a80a862d03" pod="openshift-must-gather-kl8lw/must-gather-nd9hx" err="pods \"must-gather-nd9hx\" is forbidden: User \"system:node:ip-10-0-130-162.ec2.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-kl8lw\": no relationship found between node 'ip-10-0-130-162.ec2.internal' and this object" Apr 23 18:51:30.108600 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.108551 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td24f\" (UniqueName: \"kubernetes.io/projected/1f942549-5168-4c2b-a167-28a80a862d03-kube-api-access-td24f\") pod \"1f942549-5168-4c2b-a167-28a80a862d03\" (UID: \"1f942549-5168-4c2b-a167-28a80a862d03\") " Apr 23 18:51:30.108759 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.108693 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1f942549-5168-4c2b-a167-28a80a862d03-must-gather-output\") pod \"1f942549-5168-4c2b-a167-28a80a862d03\" (UID: \"1f942549-5168-4c2b-a167-28a80a862d03\") " Apr 23 18:51:30.110189 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.110156 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f942549-5168-4c2b-a167-28a80a862d03-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1f942549-5168-4c2b-a167-28a80a862d03" (UID: "1f942549-5168-4c2b-a167-28a80a862d03"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 23 18:51:30.110731 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.110710 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f942549-5168-4c2b-a167-28a80a862d03-kube-api-access-td24f" (OuterVolumeSpecName: "kube-api-access-td24f") pod "1f942549-5168-4c2b-a167-28a80a862d03" (UID: "1f942549-5168-4c2b-a167-28a80a862d03"). InnerVolumeSpecName "kube-api-access-td24f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:51:30.209596 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.209502 2572 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1f942549-5168-4c2b-a167-28a80a862d03-must-gather-output\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:51:30.209596 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.209543 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-td24f\" (UniqueName: \"kubernetes.io/projected/1f942549-5168-4c2b-a167-28a80a862d03-kube-api-access-td24f\") on node \"ip-10-0-130-162.ec2.internal\" DevicePath \"\""
Apr 23 18:51:30.678025 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.677996 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kl8lw_must-gather-nd9hx_1f942549-5168-4c2b-a167-28a80a862d03/copy/0.log"
Apr 23 18:51:30.678365 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.678313 2572 generic.go:358] "Generic (PLEG): container finished" podID="1f942549-5168-4c2b-a167-28a80a862d03" containerID="c59f73496abefb0fdc7dde2ffe2509be6cb4f87d5df3c4c797c09743eaa59e00" exitCode=143
Apr 23 18:51:30.678365 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.678362 2572 scope.go:117] "RemoveContainer" containerID="c59f73496abefb0fdc7dde2ffe2509be6cb4f87d5df3c4c797c09743eaa59e00"
Apr 23 18:51:30.678495 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.678374 2572 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kl8lw/must-gather-nd9hx"
Apr 23 18:51:30.680897 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.680870 2572 status_manager.go:895] "Failed to get status for pod" podUID="1f942549-5168-4c2b-a167-28a80a862d03" pod="openshift-must-gather-kl8lw/must-gather-nd9hx" err="pods \"must-gather-nd9hx\" is forbidden: User \"system:node:ip-10-0-130-162.ec2.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-kl8lw\": no relationship found between node 'ip-10-0-130-162.ec2.internal' and this object"
Apr 23 18:51:30.686592 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.686542 2572 scope.go:117] "RemoveContainer" containerID="bb0378b14c284dd48c4523303853466df8f6d4d0c5d61f08e54e483c8bdb0261"
Apr 23 18:51:30.692738 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.692713 2572 status_manager.go:895] "Failed to get status for pod" podUID="1f942549-5168-4c2b-a167-28a80a862d03" pod="openshift-must-gather-kl8lw/must-gather-nd9hx" err="pods \"must-gather-nd9hx\" is forbidden: User \"system:node:ip-10-0-130-162.ec2.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-kl8lw\": no relationship found between node 'ip-10-0-130-162.ec2.internal' and this object"
Apr 23 18:51:30.698967 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.698944 2572 scope.go:117] "RemoveContainer" containerID="c59f73496abefb0fdc7dde2ffe2509be6cb4f87d5df3c4c797c09743eaa59e00"
Apr 23 18:51:30.699238 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:51:30.699221 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c59f73496abefb0fdc7dde2ffe2509be6cb4f87d5df3c4c797c09743eaa59e00\": container with ID starting with c59f73496abefb0fdc7dde2ffe2509be6cb4f87d5df3c4c797c09743eaa59e00 not found: ID does not exist" containerID="c59f73496abefb0fdc7dde2ffe2509be6cb4f87d5df3c4c797c09743eaa59e00"
Apr 23 18:51:30.699286 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.699246 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c59f73496abefb0fdc7dde2ffe2509be6cb4f87d5df3c4c797c09743eaa59e00"} err="failed to get container status \"c59f73496abefb0fdc7dde2ffe2509be6cb4f87d5df3c4c797c09743eaa59e00\": rpc error: code = NotFound desc = could not find container \"c59f73496abefb0fdc7dde2ffe2509be6cb4f87d5df3c4c797c09743eaa59e00\": container with ID starting with c59f73496abefb0fdc7dde2ffe2509be6cb4f87d5df3c4c797c09743eaa59e00 not found: ID does not exist"
Apr 23 18:51:30.699286 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.699263 2572 scope.go:117] "RemoveContainer" containerID="bb0378b14c284dd48c4523303853466df8f6d4d0c5d61f08e54e483c8bdb0261"
Apr 23 18:51:30.699503 ip-10-0-130-162 kubenswrapper[2572]: E0423 18:51:30.699486 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb0378b14c284dd48c4523303853466df8f6d4d0c5d61f08e54e483c8bdb0261\": container with ID starting with bb0378b14c284dd48c4523303853466df8f6d4d0c5d61f08e54e483c8bdb0261 not found: ID does not exist" containerID="bb0378b14c284dd48c4523303853466df8f6d4d0c5d61f08e54e483c8bdb0261"
Apr 23 18:51:30.699546 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:30.699508 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb0378b14c284dd48c4523303853466df8f6d4d0c5d61f08e54e483c8bdb0261"} err="failed to get container status \"bb0378b14c284dd48c4523303853466df8f6d4d0c5d61f08e54e483c8bdb0261\": rpc error: code = NotFound desc = could not find container \"bb0378b14c284dd48c4523303853466df8f6d4d0c5d61f08e54e483c8bdb0261\": container with ID starting with bb0378b14c284dd48c4523303853466df8f6d4d0c5d61f08e54e483c8bdb0261 not found: ID does not exist"
Apr 23 18:51:31.308271 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:31.308232 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f942549-5168-4c2b-a167-28a80a862d03" path="/var/lib/kubelet/pods/1f942549-5168-4c2b-a167-28a80a862d03/volumes"
Apr 23 18:51:31.308654 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:31.308645 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e49426a4-15c8-4789-8200-264dc7c9c076" path="/var/lib/kubelet/pods/e49426a4-15c8-4789-8200-264dc7c9c076/volumes"
Apr 23 18:51:31.620227 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:31.620155 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-b6xsg_9571f146-c9fe-45ac-b2a7-1f4153d46c32/node-exporter/0.log"
Apr 23 18:51:31.653381 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:31.653353 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-b6xsg_9571f146-c9fe-45ac-b2a7-1f4153d46c32/kube-rbac-proxy/0.log"
Apr 23 18:51:31.672695 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:31.672672 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-b6xsg_9571f146-c9fe-45ac-b2a7-1f4153d46c32/init-textfile/0.log"
Apr 23 18:51:34.490552 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.490521 2572 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"]
Apr 23 18:51:34.490926 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.490796 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f942549-5168-4c2b-a167-28a80a862d03" containerName="gather"
Apr 23 18:51:34.490926 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.490807 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f942549-5168-4c2b-a167-28a80a862d03" containerName="gather"
Apr 23 18:51:34.490926 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.490816 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f942549-5168-4c2b-a167-28a80a862d03" containerName="copy"
Apr 23 18:51:34.490926 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.490821 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f942549-5168-4c2b-a167-28a80a862d03" containerName="copy"
Apr 23 18:51:34.490926 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.490832 2572 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e49426a4-15c8-4789-8200-264dc7c9c076" containerName="switch-graph-febb8"
Apr 23 18:51:34.490926 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.490838 2572 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49426a4-15c8-4789-8200-264dc7c9c076" containerName="switch-graph-febb8"
Apr 23 18:51:34.490926 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.490884 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="e49426a4-15c8-4789-8200-264dc7c9c076" containerName="switch-graph-febb8"
Apr 23 18:51:34.490926 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.490890 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f942549-5168-4c2b-a167-28a80a862d03" containerName="gather"
Apr 23 18:51:34.490926 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.490896 2572 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f942549-5168-4c2b-a167-28a80a862d03" containerName="copy"
Apr 23 18:51:34.496008 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.495992 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.499204 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.499175 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-8jlv4\"/\"kube-root-ca.crt\""
Apr 23 18:51:34.499204 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.499196 2572 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-8jlv4\"/\"default-dockercfg-kt7rv\""
Apr 23 18:51:34.499420 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.499198 2572 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-8jlv4\"/\"openshift-service-ca.crt\""
Apr 23 18:51:34.499483 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.499444 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"]
Apr 23 18:51:34.640083 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.640045 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nj5j\" (UniqueName: \"kubernetes.io/projected/741c4a51-7fd8-49aa-a6c4-0e777acf37f0-kube-api-access-4nj5j\") pod \"perf-node-gather-daemonset-w9fbb\" (UID: \"741c4a51-7fd8-49aa-a6c4-0e777acf37f0\") " pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.640252 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.640108 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/741c4a51-7fd8-49aa-a6c4-0e777acf37f0-podres\") pod \"perf-node-gather-daemonset-w9fbb\" (UID: \"741c4a51-7fd8-49aa-a6c4-0e777acf37f0\") " pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.640252 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.640177 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/741c4a51-7fd8-49aa-a6c4-0e777acf37f0-sys\") pod \"perf-node-gather-daemonset-w9fbb\" (UID: \"741c4a51-7fd8-49aa-a6c4-0e777acf37f0\") " pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.640252 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.640235 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/741c4a51-7fd8-49aa-a6c4-0e777acf37f0-lib-modules\") pod \"perf-node-gather-daemonset-w9fbb\" (UID: \"741c4a51-7fd8-49aa-a6c4-0e777acf37f0\") " pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.640400 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.640257 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/741c4a51-7fd8-49aa-a6c4-0e777acf37f0-proc\") pod \"perf-node-gather-daemonset-w9fbb\" (UID: \"741c4a51-7fd8-49aa-a6c4-0e777acf37f0\") " pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.741615 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.741526 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/741c4a51-7fd8-49aa-a6c4-0e777acf37f0-podres\") pod \"perf-node-gather-daemonset-w9fbb\" (UID: \"741c4a51-7fd8-49aa-a6c4-0e777acf37f0\") " pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.741615 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.741579 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/741c4a51-7fd8-49aa-a6c4-0e777acf37f0-sys\") pod \"perf-node-gather-daemonset-w9fbb\" (UID: \"741c4a51-7fd8-49aa-a6c4-0e777acf37f0\") " pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.741615 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.741607 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/741c4a51-7fd8-49aa-a6c4-0e777acf37f0-lib-modules\") pod \"perf-node-gather-daemonset-w9fbb\" (UID: \"741c4a51-7fd8-49aa-a6c4-0e777acf37f0\") " pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.741889 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.741632 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/741c4a51-7fd8-49aa-a6c4-0e777acf37f0-proc\") pod \"perf-node-gather-daemonset-w9fbb\" (UID: \"741c4a51-7fd8-49aa-a6c4-0e777acf37f0\") " pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.741889 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.741664 2572 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4nj5j\" (UniqueName: \"kubernetes.io/projected/741c4a51-7fd8-49aa-a6c4-0e777acf37f0-kube-api-access-4nj5j\") pod \"perf-node-gather-daemonset-w9fbb\" (UID: \"741c4a51-7fd8-49aa-a6c4-0e777acf37f0\") " pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.741889 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.741705 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/741c4a51-7fd8-49aa-a6c4-0e777acf37f0-podres\") pod \"perf-node-gather-daemonset-w9fbb\" (UID: \"741c4a51-7fd8-49aa-a6c4-0e777acf37f0\") " pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.741889 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.741723 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/741c4a51-7fd8-49aa-a6c4-0e777acf37f0-proc\") pod \"perf-node-gather-daemonset-w9fbb\" (UID: \"741c4a51-7fd8-49aa-a6c4-0e777acf37f0\") " pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.741889 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.741737 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/741c4a51-7fd8-49aa-a6c4-0e777acf37f0-lib-modules\") pod \"perf-node-gather-daemonset-w9fbb\" (UID: \"741c4a51-7fd8-49aa-a6c4-0e777acf37f0\") " pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.741889 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.741742 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/741c4a51-7fd8-49aa-a6c4-0e777acf37f0-sys\") pod \"perf-node-gather-daemonset-w9fbb\" (UID: \"741c4a51-7fd8-49aa-a6c4-0e777acf37f0\") " pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.750127 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.750110 2572 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nj5j\" (UniqueName: \"kubernetes.io/projected/741c4a51-7fd8-49aa-a6c4-0e777acf37f0-kube-api-access-4nj5j\") pod \"perf-node-gather-daemonset-w9fbb\" (UID: \"741c4a51-7fd8-49aa-a6c4-0e777acf37f0\") " pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.806784 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.806747 2572 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:34.920943 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:34.920910 2572 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"]
Apr 23 18:51:34.924142 ip-10-0-130-162 kubenswrapper[2572]: W0423 18:51:34.924100 2572 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod741c4a51_7fd8_49aa_a6c4_0e777acf37f0.slice/crio-a9334c7976fc7212e697d65863539571f4956d247a942241a50c2bbed924e3f5 WatchSource:0}: Error finding container a9334c7976fc7212e697d65863539571f4956d247a942241a50c2bbed924e3f5: Status 404 returned error can't find the container with id a9334c7976fc7212e697d65863539571f4956d247a942241a50c2bbed924e3f5
Apr 23 18:51:35.412219 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:35.412138 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-85kpl_3be8ea05-2624-4d7d-a9b5-24df2e7b7e43/dns/0.log"
Apr 23 18:51:35.430732 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:35.430685 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-85kpl_3be8ea05-2624-4d7d-a9b5-24df2e7b7e43/kube-rbac-proxy/0.log"
Apr 23 18:51:35.500020 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:35.499992 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-mggbx_4a6da2ea-0b58-4e0b-957b-258095c2f013/dns-node-resolver/0.log"
Apr 23 18:51:35.695062 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:35.694971 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb" event={"ID":"741c4a51-7fd8-49aa-a6c4-0e777acf37f0","Type":"ContainerStarted","Data":"e97bcfef6a8d66c0b1098f49eace63b1dbd44d8dfb51166dd090f3d2cadd07ce"}
Apr 23 18:51:35.695062 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:35.695012 2572 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb" event={"ID":"741c4a51-7fd8-49aa-a6c4-0e777acf37f0","Type":"ContainerStarted","Data":"a9334c7976fc7212e697d65863539571f4956d247a942241a50c2bbed924e3f5"}
Apr 23 18:51:35.695237 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:35.695104 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:35.712581 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:35.712528 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb" podStartSLOduration=1.712515346 podStartE2EDuration="1.712515346s" podCreationTimestamp="2026-04-23 18:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:51:35.711486599 +0000 UTC m=+3203.009732732" watchObservedRunningTime="2026-04-23 18:51:35.712515346 +0000 UTC m=+3203.010761478"
Apr 23 18:51:35.978355 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:35.978252 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-9d9xv_465ff8c4-e8a9-4cb7-8353-e5f7d5a8b986/node-ca/0.log"
Apr 23 18:51:37.030391 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:37.030355 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-hhchc_df81e11f-ec7c-402f-b956-c59eab2eebbf/serve-healthcheck-canary/0.log"
Apr 23 18:51:37.598102 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:37.598066 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-mkvqp_50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb/kube-rbac-proxy/0.log"
Apr 23 18:51:37.614954 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:37.614927 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-mkvqp_50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb/exporter/0.log"
Apr 23 18:51:37.633652 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:37.633624 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-mkvqp_50f8ac27-9441-4fd8-88e9-aa3bdd1f22cb/extractor/0.log"
Apr 23 18:51:39.559828 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:39.559755 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_kserve-controller-manager-874ff48d-qgp2f_76e8acbe-712b-420e-b188-828623f502f4/manager/0.log"
Apr 23 18:51:39.582741 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:39.582708 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_llmisvc-controller-manager-68cc5db7c4-wjh5t_d81180b6-523c-4ee4-98f1-28b491a846d0/manager/0.log"
Apr 23 18:51:39.601454 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:39.601433 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_model-serving-api-86f7b4b499-q8wp2_20ae28b4-7d39-4b0c-829c-58055b904524/server/0.log"
Apr 23 18:51:40.064290 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:40.064262 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_odh-model-controller-696fc77849-w68zd_18066472-d711-4871-8c9c-78d6c6b3ebe5/manager/0.log"
Apr 23 18:51:40.082048 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:40.082020 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_s3-init-cj7j6_3dec7eb0-fce9-4a8c-ba06-5acf0b5ce44f/s3-init/0.log"
Apr 23 18:51:40.109268 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:40.109235 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_seaweedfs-86cc847c5c-7755d_e729f255-97fe-411d-a1cf-80d1439e063f/seaweedfs/0.log"
Apr 23 18:51:41.707794 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:41.707769 2572 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-8jlv4/perf-node-gather-daemonset-w9fbb"
Apr 23 18:51:45.554447 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:45.554364 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-69j8s_e9b51bff-ccdd-40d2-a1c6-c65fe8cff43a/kube-multus/0.log"
Apr 23 18:51:45.919619 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:45.919535 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lg6b8_8f18ab0b-c24e-4d53-9d15-941a178305d9/kube-multus-additional-cni-plugins/0.log"
Apr 23 18:51:45.939009 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:45.938980 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lg6b8_8f18ab0b-c24e-4d53-9d15-941a178305d9/egress-router-binary-copy/0.log"
Apr 23 18:51:45.959873 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:45.959850 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lg6b8_8f18ab0b-c24e-4d53-9d15-941a178305d9/cni-plugins/0.log"
Apr 23 18:51:45.977746 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:45.977723 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lg6b8_8f18ab0b-c24e-4d53-9d15-941a178305d9/bond-cni-plugin/0.log"
Apr 23 18:51:45.997742 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:45.997718 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lg6b8_8f18ab0b-c24e-4d53-9d15-941a178305d9/routeoverride-cni/0.log"
Apr 23 18:51:46.023139 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:46.023118 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lg6b8_8f18ab0b-c24e-4d53-9d15-941a178305d9/whereabouts-cni-bincopy/0.log"
Apr 23 18:51:46.042179 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:46.042148 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lg6b8_8f18ab0b-c24e-4d53-9d15-941a178305d9/whereabouts-cni/0.log"
Apr 23 18:51:46.190929 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:46.190851 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-nh2kn_d9157db1-0537-4915-a273-5b7a482bc173/network-metrics-daemon/0.log"
Apr 23 18:51:46.205503 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:46.205476 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-nh2kn_d9157db1-0537-4915-a273-5b7a482bc173/kube-rbac-proxy/0.log"
Apr 23 18:51:47.271364 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:47.271337 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rz688_ba95391c-a044-45b6-b86c-e5c745e4e7d1/ovn-controller/0.log"
Apr 23 18:51:47.326949 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:47.326915 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rz688_ba95391c-a044-45b6-b86c-e5c745e4e7d1/ovn-acl-logging/0.log"
Apr 23 18:51:47.352868 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:47.352842 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rz688_ba95391c-a044-45b6-b86c-e5c745e4e7d1/kube-rbac-proxy-node/0.log"
Apr 23 18:51:47.373642 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:47.373621 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rz688_ba95391c-a044-45b6-b86c-e5c745e4e7d1/kube-rbac-proxy-ovn-metrics/0.log"
Apr 23 18:51:47.402607 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:47.402582 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rz688_ba95391c-a044-45b6-b86c-e5c745e4e7d1/northd/0.log"
Apr 23 18:51:47.420930 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:47.420909 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rz688_ba95391c-a044-45b6-b86c-e5c745e4e7d1/nbdb/0.log"
Apr 23 18:51:47.448775 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:47.448755 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rz688_ba95391c-a044-45b6-b86c-e5c745e4e7d1/sbdb/0.log"
Apr 23 18:51:47.632701 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:47.632619 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rz688_ba95391c-a044-45b6-b86c-e5c745e4e7d1/ovnkube-controller/0.log"
Apr 23 18:51:48.933704 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:48.933676 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-dpfbr_17e9a772-9316-4c67-bffe-e44ea2915f0f/network-check-target-container/0.log"
Apr 23 18:51:49.836549 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:49.836517 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-vdzr8_41511472-f1af-4c98-ab11-9729dc21519e/iptables-alerter/0.log"
Apr 23 18:51:50.448244 ip-10-0-130-162 kubenswrapper[2572]: I0423 18:51:50.448215 2572 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-rf4xp_6985296e-1df6-4584-8a29-5fb68230893f/tuned/0.log"